y0news

#model-evaluation News & Analysis

93 articles tagged with #model-evaluation. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Bearish · arXiv – CS AI · 1d ago · 7/10

Red Teaming Large Reasoning Models

Researchers introduce RT-LRM, a comprehensive benchmark for evaluating the trustworthiness of Large Reasoning Models across truthfulness, safety, and efficiency dimensions. The study reveals that LRMs face significant vulnerabilities including CoT-hijacking and prompt-induced inefficiencies, demonstrating they are more fragile than traditional LLMs when exposed to reasoning-induced risks.

AI · Neutral · arXiv – CS AI · 1d ago · 7/10

Evaluating Relational Reasoning in LLMs with REL

Researchers introduce REL, a benchmark framework that evaluates relational reasoning in large language models by measuring Relational Complexity (RC)—the number of entities that must be simultaneously bound to apply a relation. The study reveals that frontier LLMs consistently degrade in performance as RC increases, exposing a fundamental limitation in higher-arity reasoning that persists even with increased compute and in-context learning.
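
The degradation pattern is straightforward to probe with a toy version of the idea. The sketch below sweeps the number of entities a model must bind simultaneously and records accuracy per size; it is illustrative only, not the REL benchmark itself, and the `complete()` callable standing in for an LLM API is hypothetical.

```python
# Toy relational-load probe (not the REL benchmark): transitive-ordering
# questions whose answer requires binding all n entities at once.
# `complete` is a hypothetical callable that sends a prompt to an LLM
# and returns its text response.
import random
import string

def ordering_trial(complete, n):
    names = random.sample(string.ascii_uppercase, n)
    facts = ". ".join(f"{a} is taller than {b}" for a, b in zip(names, names[1:]))
    prompt = f"{facts}. Who is the tallest? Answer with a single letter."
    return names[0] in complete(prompt)

def accuracy_by_load(complete, sizes=(3, 5, 7, 9), trials=50):
    # Accuracy should fall as n grows if higher-arity binding is the bottleneck.
    return {n: sum(ordering_trial(complete, n) for _ in range(trials)) / trials
            for n in sizes}
```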

AI · Neutral · arXiv – CS AI · 2d ago · 7/10

Can Large Language Models Infer Causal Relationships from Real-World Text?

Researchers developed the first real-world benchmark for evaluating whether large language models can infer causal relationships from complex academic texts. The study reveals that LLMs struggle significantly with this task, with the best model achieving an F1 score of only 0.535, highlighting a critical gap in the reasoning capabilities needed for progress toward AGI.

AI · Bearish · arXiv – CS AI · 2d ago · 7/10

Demographic and Linguistic Bias Evaluation in Omnimodal Language Models

Researchers evaluated four omnimodal AI models across text, image, audio, and video processing, finding substantial demographic and linguistic biases particularly in audio understanding tasks. The study reveals significant accuracy disparities across age, gender, language, and skin tone, with audio tasks showing prediction collapse toward narrow categories, highlighting fairness concerns as these models see wider real-world deployment.

AI · Bearish · arXiv – CS AI · 2d ago · 7/10

Is There Knowledge Left to Extract? Evidence of Fragility in Medically Fine-Tuned Vision-Language Models

Researchers evaluated domain-specific fine-tuning of vision-language models (VLMs) on medical imaging tasks and found that performance degrades significantly with task complexity, with medical fine-tuning providing no consistent advantage. The study reveals that these models exhibit fragility and high sensitivity to prompt variations, questioning the reliability of VLMs for high-stakes medical applications.

🧠 GPT-5
AI · Bullish · arXiv – CS AI · 2d ago · 7/10

How Many Tries Does It Take? Iterative Self-Repair in LLM Code Generation Across Model Scales and Benchmarks

Researchers demonstrate that modern large language models can significantly improve code generation accuracy through iterative self-repair—feeding execution errors back to the model for correction—achieving 4.9-30.0 percentage point gains across benchmarks. The study reveals that instruction-tuned models succeed with prompting alone even at 8B scale, with Gemini 2.5 Flash reaching 96.3% pass rates on HumanEval, though logical errors remain substantially harder to fix than syntax errors.

🧠 Gemini · 🧠 Llama
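
The core loop is simple to sketch. The version below assumes a hypothetical `complete(prompt)` LLM call and a `run_tests(code)` harness returning a pass flag and an error message; neither name comes from the paper.

```python
# Minimal execution-feedback repair loop: generate code, run the tests,
# and feed any failure back to the model for a corrected attempt.
def self_repair(task_prompt, complete, run_tests, max_tries=4):
    code = complete(task_prompt)
    for attempt in range(max_tries):
        passed, error = run_tests(code)
        if passed:
            return code, attempt  # number of repairs that were needed
        repair_prompt = (
            f"{task_prompt}\n\nYour previous solution:\n{code}\n"
            f"failed with:\n{error}\nReturn a corrected version."
        )
        code = complete(repair_prompt)
    return code, max_tries  # best effort after exhausting the budget
```
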
AI · Bullish · arXiv – CS AI · 2d ago · 7/10

UniToolCall: Unifying Tool-Use Representation, Data, and Evaluation for LLM Agents

UniToolCall introduces a standardized framework unifying tool-use representation, training data, and evaluation for LLM agents. The framework combines 22k+ tools and 390k+ training instances with a unified evaluation methodology, enabling fine-tuned models such as Qwen3-8B to achieve 93% precision, surpassing GPT, Gemini, and Claude on specific benchmarks.

🧠 Claude · 🧠 Gemini
AI · Bullish · arXiv – CS AI · 2d ago · 7/10

SpatialScore: Towards Comprehensive Evaluation for Spatial Intelligence

Researchers introduce SpatialScore, a comprehensive benchmark with 5K samples across 30 tasks to evaluate multimodal language models' spatial reasoning capabilities. The work includes SpatialCorpus, a 331K-sample training dataset, and SpatialAgent, a multi-agent system with 12 specialized tools, demonstrating significant improvements in spatial intelligence without additional model training.

AI · Bearish · arXiv – CS AI · 2d ago · 7/10

Grid2Matrix: Revealing Digital Agnosia in Vision-Language Models

Researchers introduce Grid2Matrix, a benchmark that reveals fundamental limitations in Vision-Language Models' ability to accurately process and describe visual details in grids. The study identifies a critical gap called 'Digital Agnosia'—where visual encoders preserve grid information that fails to translate into accurate language outputs—suggesting that VLM failures stem not from poor vision encoding but from the disconnection between visual features and linguistic expression.

AI · Bearish · arXiv – CS AI · 2d ago · 7/10

Cross-Cultural Value Awareness in Large Vision-Language Models

Researchers have conducted a comprehensive study examining how large vision-language models (LVLMs) exhibit cultural stereotypes and biases when making judgments about people's moral, ethical, and political values based on cultural context cues in images. Using counterfactual image sets and Moral Foundations Theory, the analysis across five popular LVLMs reveals significant concerns about AI fairness beyond traditional social biases, with implications for deployed AI systems used globally.

AI · Neutral · arXiv – CS AI · 2d ago · 7/10

From GPT-3 to GPT-5: Mapping their capabilities, scope, limitations, and consequences

A comprehensive comparative study traces the evolution of OpenAI's GPT models from GPT-3 through GPT-5, revealing that successive generations represent far more than incremental capability improvements. The research demonstrates a fundamental shift from simple text predictors to integrated, multimodal systems with tool access and workflow capabilities, while persistent limitations like hallucination and benchmark fragility remain largely unresolved across all versions.

🧠 GPT-4 · 🧠 GPT-5
AI · Neutral · arXiv – CS AI · 3d ago · 7/10

Medical Reasoning with Large Language Models: A Survey and MR-Bench

Researchers present a comprehensive survey of medical reasoning in large language models, introducing MR-Bench, a clinical benchmark derived from real hospital data. The study reveals a significant performance gap between exam-style tasks and authentic clinical decision-making, highlighting that robust medical reasoning requires more than factual recall in safety-critical healthcare applications.

AI · Neutral · arXiv – CS AI · 6d ago · 7/10

Invisible Influences: Investigating Implicit Intersectional Biases through Persona Engineering in Large Language Models

Researchers introduced BADx, a novel metric that measures how Large Language Models amplify implicit biases when adopting different social personas, revealing that popular LLMs like GPT-4o and DeepSeek-R1 exhibit significant context-dependent bias shifts. The study across five state-of-the-art models demonstrates that static bias testing methods fail to capture dynamic bias amplification, with implications for AI safety and responsible deployment.

🧠 GPT-4 · 🧠 Claude
AI · Bearish · arXiv – CS AI · 6d ago · 7/10

Digital Skin, Digital Bias: Uncovering Tone-Based Biases in LLMs and Emoji Embeddings

Researchers conducted the first large-scale study comparing bias in skin-toned emoji representations across specialized emoji models and four major LLMs (Llama, Gemma, Qwen, Mistral), finding that while LLMs handle skin tone modifiers well, popular emoji embedding models exhibit severe deficiencies and systemic biases in sentiment and meaning across different skin tones.

🧠 Llama
AI · Neutral · arXiv – CS AI · Apr 7 · 7/10

Justified or Just Convincing? Error Verifiability as a Dimension of LLM Quality

Researchers introduce 'error verifiability' as a new metric to measure whether AI-generated justifications help users distinguish correct from incorrect answers. The study found that common AI improvement methods don't enhance verifiability, but two new domain-specific approaches successfully improved users' ability to assess answer correctness.

AI · Bullish · arXiv – CS AI · Apr 7 · 7/10

Robust LLM Performance Certification via Constrained Maximum Likelihood Estimation

Researchers propose a new constrained maximum likelihood estimation (MLE) method to accurately estimate failure rates of large language models by combining human-labeled data, automated judge annotations, and domain-specific constraints. The approach outperforms existing methods like Prediction-Powered Inference across various experimental conditions, providing a more reliable framework for LLM safety certification.
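
A stripped-down version of the estimation problem helps make the idea concrete. The sketch below combines a small human-labeled sample with a large judge-labeled one in a single likelihood constrained to the unit interval; the judge's sensitivity and specificity and all counts are invented, and this is a simplified stand-in for the paper's method rather than a reproduction of it.

```python
# Toy constrained MLE of an LLM failure rate p from two evidence sources:
# direct human labels and noisier automated-judge labels.
import numpy as np
from scipy.optimize import minimize_scalar

def neg_log_lik(p, human_fail, human_pass, judge_fail, judge_pass,
                sens=0.9, spec=0.85):
    # P(judge flags a failure) given true failure rate p and judge quality.
    q = p * sens + (1 - p) * (1 - spec)
    return -(human_fail * np.log(p) + human_pass * np.log(1 - p)
             + judge_fail * np.log(q) + judge_pass * np.log(1 - q))

# 100 human-labeled samples, 2,000 judge-labeled samples (all invented).
res = minimize_scalar(neg_log_lik, bounds=(1e-6, 1 - 1e-6), method="bounded",
                      args=(3, 97, 180, 1820))
print(f"estimated failure rate: {res.x:.4f}")
```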

AI · Neutral · arXiv – CS AI · Mar 17 · 7/10

LLMs as Signal Detectors: Sensitivity, Bias, and the Temperature-Criterion Analogy

Researchers applied Signal Detection Theory (SDT) to analyze three large language models across 168,000 trials, finding that changing the temperature parameter shifts both sensitivity and response bias simultaneously. The study reveals that traditional calibration metrics miss important diagnostic information that SDT's full parametric framework can provide.
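
The two SDT quantities in question are standard and easy to compute from a confusion table. The sketch below uses the textbook formulas for sensitivity (d′) and criterion (c); the trial counts are made up.

```python
# Sensitivity d' = z(hit rate) - z(false-alarm rate);
# criterion  c  = -0.5 * (z(hit rate) + z(false-alarm rate)).
from scipy.stats import norm

def sdt_stats(hits, misses, false_alarms, correct_rejections):
    # Log-linear correction keeps z() finite when a rate is 0 or 1.
    h = (hits + 0.5) / (hits + misses + 1)
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d_prime = norm.ppf(h) - norm.ppf(f)             # discriminability
    criterion = -0.5 * (norm.ppf(h) + norm.ppf(f))  # response bias
    return d_prime, criterion

print(sdt_stats(hits=80, misses=20, false_alarms=30, correct_rejections=70))
```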

AI · Bearish · arXiv – CS AI · Mar 16 · 7/10

Diagnosing Retrieval Bias Under Multiple In-Context Knowledge Updates in Large Language Models

Researchers identify a significant bias in Large Language Models when processing multiple updates to the same factual information within context. The study reveals that LLMs struggle to accurately retrieve the most recent version of updated facts, with performance degrading as the number of updates increases, similar to memory interference patterns observed in cognitive psychology.
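
A minimal probe for this effect is easy to write: present several in-context revisions of the same fact, then check whether the model returns the latest one. The `complete()` callable below is a hypothetical LLM API stand-in, and the example fact is invented.

```python
# Update-retrieval probe: does the model answer with the most recent value?
def probe_updates(complete, entity="the meeting location",
                  values=("Room A12", "Room B07", "Room C30")):
    updates = "\n".join(
        f"Update {i + 1}: {entity} is now {v}." for i, v in enumerate(values)
    )
    prompt = f"{updates}\n\nAfter all updates, what is {entity}? Answer briefly."
    # True only if the final (most recent) value appears in the answer.
    return values[-1] in complete(prompt)
```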

AI · Bearish · arXiv – CS AI · Mar 16 · 7/10

OffTopicEval: When Large Language Models Enter the Wrong Chat, Almost Always!

Researchers introduced OffTopicEval, a benchmark revealing that all major LLMs suffer from poor operational safety, with even top performers like Qwen-3 and Mistral achieving only 77-80% accuracy in staying on-topic for specific use cases. The study proposes prompt-based steering methods that can improve performance by up to 41%, highlighting critical safety gaps in current AI deployment.

🧠 Llama
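
Prompt-based steering of the kind the paper evaluates can be as simple as a scope declaration plus a sentinel refusal token. The guard wording below is invented for illustration, not taken from OffTopicEval, and `complete()` is a hypothetical LLM call.

```python
# Scope guard via prompting: out-of-scope requests map to a sentinel string.
GUARD = ("You are a personal-banking assistant. If a request falls outside "
         "personal banking, reply with exactly: OUT_OF_SCOPE.")

def guarded_reply(complete, user_msg):
    reply = complete(f"{GUARD}\n\nUser: {user_msg}\nAssistant:")
    return None if "OUT_OF_SCOPE" in reply else reply  # None == declined
```
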
AI · Bearish · arXiv – CS AI · Mar 12 · 7/10

The Dunning-Kruger Effect in Large Language Models: An Empirical Study of Confidence Calibration

A new study reveals that large language models exhibit patterns similar to the Dunning-Kruger effect, where poorly performing AI models show severe overconfidence in their abilities. The research tested four major models across 24,000 trials, finding that Kimi K2 displayed the worst calibration with 72.6% overconfidence despite only 23.3% accuracy, while Claude Haiku 4.5 achieved the best performance with proper confidence calibration.

🧠 Claude · 🧠 Haiku · 🧠 Gemini
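
The headline numbers correspond to a simple calibration gap: mean stated confidence minus observed accuracy. A minimal sketch, with invented records:

```python
# Overconfidence gap = mean stated confidence - observed accuracy.
# A positive gap means the model claims more certainty than it earns.
def overconfidence(records):
    # records: list of (stated confidence in [0, 1], answer-was-correct bool)
    mean_conf = sum(c for c, _ in records) / len(records)
    accuracy = sum(ok for _, ok in records) / len(records)
    return mean_conf - accuracy

records = [(0.90, False), (0.80, True), (0.95, False), (0.85, False)]
print(f"overconfidence gap: {overconfidence(records):+.3f}")
```
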
AI · Neutral · arXiv – CS AI · Mar 12 · 7/10

Evaluating Adjective-Noun Compositionality in LLMs: Functional vs Representational Perspectives

A research study reveals that large language models develop strong internal compositional representations for adjective-noun combinations, but struggle to consistently translate these representations into successful task performance. The findings highlight a significant gap between what LLMs understand internally and their functional capabilities.

AI · Neutral · arXiv – CS AI · Mar 5 · 6/10

Automated Concept Discovery for LLM-as-a-Judge Preference Analysis

Researchers developed automated methods to discover biases in Large Language Models when used as judges, analyzing over 27,000 paired responses. The study found that LLM judges exhibit systematic biases, including a stronger preference than humans for refusing sensitive requests, a tendency to favor concrete and empathetic responses, and bias against certain legal guidance.
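
The analysis style is easy to illustrate with a toy tally over paired judgments: for each discovered concept, count how often it appears only in the preferred response versus only in the rejected one. The concept labels and pairs below are invented, and this is a sketch of the tallying step only, not the paper's discovery pipeline.

```python
# Toy concept-preference tally for LLM-as-a-judge bias analysis.
from collections import Counter

def bias_by_concept(pairs):
    # pairs: list of (concepts in preferred response, concepts in rejected one)
    score = Counter()
    for preferred, rejected in pairs:
        for c in preferred - rejected:
            score[c] += 1  # concept co-occurred with winning responses
        for c in rejected - preferred:
            score[c] -= 1  # concept co-occurred with losing responses
    return score

pairs = [({"empathetic", "concrete"}, {"legalistic"}),
         ({"refusal"}, {"concrete"})]
print(bias_by_concept(pairs))  # positive score = judge-favored concept
```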

AI · Bearish · arXiv – CS AI · Mar 5 · 7/10

When Shallow Wins: Silent Failures and the Depth-Accuracy Paradox in Latent Reasoning

Research reveals that state-of-the-art AI mathematical reasoning models like Qwen2.5-Math-7B achieve 61% accuracy primarily through unreliable computational pathways, with only 18.4% of correct predictions arising from stable reasoning. The remaining 81.6% of correct predictions come from inconsistent methods, and a further 8.8% of outputs are confident but incorrect.

AI · Neutral · arXiv – CS AI · Mar 4 · 7/10

Spectrum Tuning: Post-Training for Distributional Coverage and In-Context Steerability

Researchers introduce Spectrum Tuning, a new post-training method that improves AI language models' ability to generate diverse outputs and follow in-context steering instructions. The technique addresses limitations in current post-training approaches that reduce models' distributional coverage and flexibility when tasks require multiple valid answers rather than single correct responses.

Page 1 of 4