y0news

#ai-evaluation News & Analysis

143 articles tagged with #ai-evaluation. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Neutral · arXiv – CS AI · Apr 7 · 6/10

Automatically Generating Hard Math Problems from Hypothesis-Driven Error Analysis

Researchers have developed a new automated pipeline that generates challenging math problems by first identifying specific mathematical concepts where LLMs struggle, then creating targeted problems to test these weaknesses. The method successfully reduced a leading LLM's accuracy from 77% to 45%, demonstrating its effectiveness at creating more rigorous benchmarks.

🧠 Llama
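
As a rough illustration of the approach, a minimal sketch of such an error-analysis-driven pipeline might look like the following; the `ask_llm` helper and the prompts are placeholders, not the paper's actual implementation.

```python
# Minimal sketch of a hypothesis-driven hard-problem pipeline.
# `ask_llm` is a hypothetical helper wrapping whatever chat API is used;
# prompts and loop structure are illustrative, not the paper's exact method.

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("plug in a chat-completion client here")

def find_weak_concepts(wrong_answers: list[dict]) -> list[str]:
    """Hypothesize which mathematical concepts the target model gets wrong."""
    errors = "\n".join(f"Q: {e['question']}\nModel answer: {e['answer']}"
                       for e in wrong_answers)
    reply = ask_llm(
        "Given these incorrect solutions, list the underlying mathematical "
        f"concepts the model appears to misunderstand, one per line:\n{errors}"
    )
    return [line.strip() for line in reply.splitlines() if line.strip()]

def generate_hard_problems(concept: str, n: int = 5) -> list[str]:
    """Generate new problems that specifically stress one weak concept."""
    reply = ask_llm(
        f"Write {n} challenging, self-contained math problems that require "
        f"correct handling of: {concept}. One problem per line."
    )
    return [line.strip() for line in reply.splitlines() if line.strip()][:n]

def build_benchmark(wrong_answers: list[dict]) -> list[str]:
    problems = []
    for concept in find_weak_concepts(wrong_answers):
        problems.extend(generate_hard_problems(concept))
    return problems
```
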
AI · Neutral · arXiv – CS AI · Apr 7 · 6/10

Graphic-Design-Bench: A Comprehensive Benchmark for Evaluating AI on Graphic Design Tasks

Researchers introduce GraphicDesignBench (GDB), the first comprehensive benchmark suite for evaluating AI models on professional graphic design tasks including layout, typography, and animation. Testing reveals current AI models struggle with spatial reasoning, vector code generation, and typographic precision despite showing promise in high-level semantic understanding.

AI · Neutral · arXiv – CS AI · Apr 7 · 6/10

ClawArena: Benchmarking AI Agents in Evolving Information Environments

Researchers introduce ClawArena, a new benchmark for evaluating AI agents' ability to maintain accurate beliefs in evolving information environments with conflicting sources. The benchmark tests 64 scenarios across 8 professional domains, revealing significant performance gaps between different AI models and frameworks in handling dynamic belief revision and multi-source reasoning.

AI · Neutral · arXiv – CS AI · Apr 7 · 6/10

Multilingual Prompt Localization for Agent-as-a-Judge: Language and Backbone Sensitivity in Requirement-Level Evaluation

A research study reveals that AI model performance rankings change dramatically based on the evaluation language used, with GPT-4o performing best in English while Gemini leads in Arabic and Hindi. The study tested 55 development tasks across five languages and six AI models, showing no single model dominates across all languages.

🧠 GPT-4 · 🧠 Gemini
AI · Bearish · arXiv – CS AI · Apr 7 · 6/10

Individual and Combined Effects of English as a Second Language and Typos on LLM Performance

Research reveals that Large Language Models (LLMs) experience greater performance degradation when facing English as a Second Language (ESL) inputs combined with typographical errors, compared to either factor alone. The study tested eight ESL variants with three levels of typos, finding that evaluations on clean English may overestimate real-world model performance.
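
For a sense of how such perturbations can be constructed, the sketch below injects character-level typos at configurable rates into prompt variants; the corruption model and the example ESL rewrite are illustrative assumptions, not the study's actual materials.

```python
import random

def add_typos(text: str, rate: float, seed: int = 0) -> str:
    """Corrupt roughly `rate` of word-internal characters with swaps/drops.
    Illustrative only; the paper's typo model may differ."""
    rng = random.Random(seed)
    chars = list(text)
    for i in range(len(chars) - 1):
        if chars[i].isalpha() and chars[i + 1].isalpha() and rng.random() < rate:
            if rng.random() < 0.5:
                chars[i], chars[i + 1] = chars[i + 1], chars[i]   # adjacent swap
            else:
                chars[i] = ""                                     # deletion
    return "".join(chars)

# Typo levels crossed with (hypothetical) native vs. ESL phrasings of one prompt:
levels = {"none": 0.0, "light": 0.02, "heavy": 0.08}
prompt_variants = {
    "native": "Summarize the following article in two sentences.",
    "esl":    "Please make the summary of following article in two sentence.",
}
for variant, prompt in prompt_variants.items():
    for name, rate in levels.items():
        print(variant, name, "->", add_typos(prompt, rate))
```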

AI · Neutral · arXiv – CS AI · Apr 7 · 6/10

LiveFact: A Dynamic, Time-Aware Benchmark for LLM-Driven Fake News Detection

Researchers have developed LiveFact, a new dynamic benchmark for evaluating Large Language Models' ability to detect fake news and misinformation in real-time conditions. The benchmark addresses limitations of static testing by using temporal evidence sets and finds that open-source models like Qwen3-235B-A22B now match proprietary systems in performance.
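
The core idea of a time-aware evidence set can be sketched as a simple timestamp filter, so a verdict is based only on evidence published before the claim; the record schema and helper below are assumptions for illustration, not LiveFact's actual format.

```python
from datetime import datetime

# Hypothetical evidence records; the benchmark's real schema may differ.
evidence = [
    {"text": "Official denial issued.", "published": "2025-03-10T09:00:00+00:00"},
    {"text": "Later correction published.", "published": "2025-03-20T12:00:00+00:00"},
]

def evidence_available_at(claim_time: str, pool: list[dict]) -> list[dict]:
    """Keep only evidence published before the claim's timestamp,
    so the verdict reflects what was knowable at that moment."""
    cutoff = datetime.fromisoformat(claim_time)
    return [e for e in pool if datetime.fromisoformat(e["published"]) <= cutoff]

visible = evidence_available_at("2025-03-12T00:00:00+00:00", evidence)
print([e["text"] for e in visible])  # only the first record survives the cutoff
```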

AI · Neutral · arXiv – CS AI · Apr 6 · 6/10

StructEval: Benchmarking LLMs' Capabilities to Generate Structural Outputs

Researchers introduce StructEval, a comprehensive benchmark for evaluating Large Language Models' ability to generate structured outputs across 18 formats including JSON, HTML, and React. Even state-of-the-art models like o1-mini only achieve 75.58% average scores, with open-source models performing approximately 10 points lower.
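
A structured-output benchmark ultimately reduces to checking generations against a target schema; the toy JSON scorer below shows the pattern (the scoring rule is a simplification, not StructEval's actual metric).

```python
import json

def score_json_output(raw: str, required_keys: dict) -> float:
    """Score a model's JSON response: 0 if unparseable, otherwise the
    fraction of required keys present with the expected type.
    Illustrative scoring only; the benchmark's real metric is richer."""
    try:
        obj = json.loads(raw)
    except json.JSONDecodeError:
        return 0.0
    hits = sum(1 for key, typ in required_keys.items()
               if key in obj and isinstance(obj[key], typ))
    return hits / len(required_keys)

schema = {"title": str, "year": int, "tags": list}
print(score_json_output('{"title": "Report", "year": 2025, "tags": ["ai"]}', schema))  # 1.0
print(score_json_output('{"title": "Report", "year": "2025"}', schema))                # ~0.33
```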

AI · Neutral · arXiv – CS AI · Mar 27 · 6/10

RubricEval: A Rubric-Level Meta-Evaluation Benchmark for LLM Judges in Instruction Following

Researchers introduce RubricEval, the first rubric-level meta-evaluation benchmark for assessing how well AI judges evaluate instruction-following in large language models. Even advanced models like GPT-4o achieve only 55.97% accuracy on the challenging subset, highlighting significant gaps in AI evaluation reliability.

🧠 GPT-4
AI · Neutral · arXiv – CS AI · Mar 26 · 6/10

DepthCharge: A Domain-Agnostic Framework for Measuring Depth-Dependent Knowledge in Large Language Models

Researchers developed DepthCharge, a new framework for measuring how deeply large language models can maintain accurate responses when questioned about domain-specific knowledge. Testing across four domains revealed significant variation in model performance depth, with no single AI model dominating all areas and expensive models not always achieving superior results.
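
One way to picture depth-dependent measurement is a chain of progressively narrower follow-up questions, scored by how many levels a model survives; the helper names and the matching rule below are illustrative assumptions rather than the paper's protocol.

```python
# Sketch of depth probing: keep asking narrower follow-up questions until the
# model's answer stops matching the reference. `ask_llm` and the question
# chains are hypothetical stand-ins, not the paper's actual method.

def ask_llm(question: str) -> str:
    raise NotImplementedError("plug in a model client here")

def depth_reached(question_chain: list[tuple[str, str]]) -> int:
    """question_chain: (question, reference_answer) pairs ordered from broad
    to deep. Returns how many consecutive levels the model answers correctly."""
    for depth, (question, reference) in enumerate(question_chain):
        answer = ask_llm(question)
        if reference.lower() not in answer.lower():
            return depth
    return len(question_chain)
```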

AI · Neutral · arXiv – CS AI · Mar 17 · 6/10

AgentProcessBench: Diagnosing Step-Level Process Quality in Tool-Using Agents

Researchers introduce AgentProcessBench, the first benchmark for evaluating step-level effectiveness in AI tool-using agents, comprising 1,000 trajectories and 8,509 human-labeled annotations. The benchmark reveals that current AI models struggle with distinguishing neutral and erroneous actions in tool execution, and that process-level signals can significantly enhance test-time performance.

AI · Neutral · arXiv – CS AI · Mar 17 · 6/10

VTC-Bench: Evaluating Agentic Multimodal Models via Compositional Visual Tool Chaining

Researchers introduce VTC-Bench, a comprehensive benchmark for evaluating multimodal AI models' ability to use visual tools for complex tasks. The benchmark reveals significant limitations in current models, with leading model Gemini-3.0-Pro achieving only 51% accuracy on multi-tool visual reasoning tasks.

🧠 Gemini
AI · Bearish · arXiv – CS AI · Mar 17 · 6/10

Do Metrics for Counterfactual Explanations Align with User Perception?

A new study reveals that standard algorithmic metrics used to evaluate AI counterfactual explanations poorly correlate with human perceptions of explanation quality. The research found weak and dataset-dependent relationships between technical metrics and user judgments, highlighting fundamental limitations in current AI explainability evaluation methods.
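
Checking whether an algorithmic metric tracks user judgments comes down to a rank correlation between the two; the sketch below uses made-up numbers purely to show the computation.

```python
from scipy.stats import spearmanr

# Hypothetical data: algorithmic scores for a set of counterfactual explanations
# and mean user ratings (1-5) of the same explanations. Values are made up.
proximity_scores = [0.91, 0.75, 0.88, 0.60, 0.70, 0.95, 0.55, 0.80]
user_ratings     = [3.2,  4.1,  2.8,  3.9,  3.0,  3.5,  4.0,  2.9]

rho, p_value = spearmanr(proximity_scores, user_ratings)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
# A weak or unstable rho is exactly the kind of misalignment the study reports.
```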

AI · Neutral · arXiv – CS AI · Mar 17 · 6/10

NetArena: Dynamic Benchmarks for AI Agents in Network Automation

NetArena introduces a dynamic benchmarking framework for evaluating AI agents on network automation tasks, addressing the limitations of static benchmarks through runtime query generation and integration with a network emulator. Evaluated agents achieve only 13-38% performance on realistic network queries, and the dynamic design significantly improves statistical reliability, reducing confidence-interval overlap from 85% to 0%.
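
Runtime query generation means questions and their ground truth are instantiated from the live (emulated) network state at evaluation time rather than fixed in advance; the toy generator below illustrates the pattern with a faked network snapshot, not NetArena's own emulator integration.

```python
import random

# Toy runtime query generator: each query and its ground truth are derived
# from the current network state, so answers cannot be memorized in advance.
# The snapshot and templates are hypothetical stand-ins for a real emulator.

def snapshot_network(seed: int) -> dict:
    rng = random.Random(seed)
    return {f"eth{i}": rng.choice(["up", "down"]) for i in range(4)}

def make_query(seed: int) -> tuple[str, str]:
    state = snapshot_network(seed)
    iface = random.Random(seed).choice(list(state))
    question = f"Is interface {iface} currently up or down?"
    return question, state[iface]  # ground truth computed from live state

for seed in range(3):
    q, truth = make_query(seed)
    print(q, "->", truth)
```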

AI · Bullish · arXiv – CS AI · Mar 17 · 6/10

EvolvR: Self-Evolving Pairwise Reasoning for Story Evaluation to Enhance Generation

Researchers have developed EvolvR, a self-evolving framework that improves AI's ability to evaluate and generate stories through pairwise reasoning and multi-agent data filtering. The system achieves state-of-the-art performance on three evaluation benchmarks and significantly enhances story generation quality when used as a reward model.

AI · Neutral · arXiv – CS AI · Mar 16 · 6/10

LLM BiasScope: A Real-Time Bias Analysis Platform for Comparative LLM Evaluation

Researchers have launched LLM BiasScope, an open-source web application that enables real-time bias analysis and side-by-side comparison of outputs from major language models including Google Gemini, DeepSeek, and Meta Llama. The platform uses a two-stage bias detection pipeline and provides interactive visualizations to help researchers and practitioners evaluate bias patterns across different AI models.

🏢 Hugging Face · 🧠 Gemini · 🧠 Llama
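
A two-stage comparison of this kind can be sketched as: collect outputs from several models on prompt pairs that differ only in a demographic attribute, then score each output and compare the gap per model. The helper functions and prompt pair below are hypothetical stand-ins, not the platform's actual pipeline.

```python
# Stage 1: query each model on a minimally different prompt pair.
# Stage 2: score the outputs and compare the per-model gap.
# `query_model` and `score_sentiment` are placeholders to be wired up.

def query_model(model: str, prompt: str) -> str:
    raise NotImplementedError("wire up Gemini / DeepSeek / Llama clients here")

def score_sentiment(text: str) -> float:
    raise NotImplementedError("e.g. a sentiment classifier returning [-1, 1]")

def bias_gap(model: str, prompt_a: str, prompt_b: str) -> float:
    """Difference in scored sentiment between two otherwise-identical prompts."""
    return score_sentiment(query_model(model, prompt_a)) - \
           score_sentiment(query_model(model, prompt_b))

pair = ("Describe a typical engineer named John.",
        "Describe a typical engineer named Maria.")
# gaps = {m: bias_gap(m, *pair) for m in ["gemini", "deepseek", "llama"]}
```
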
AI · Neutral · arXiv – CS AI · Mar 16 · 6/10

Literary Narrative as Moral Probe: A Cross-System Framework for Evaluating AI Ethical Reasoning and Refusal Behavior

Researchers developed a new method for evaluating AI ethical reasoning using literary narratives from science fiction, testing 13 AI systems across 24 conditions. The study found that current AI systems produce surface-level ethical responses rather than engaging in genuine moral reasoning, with more sophisticated systems showing more complex failure modes.

🏢 Anthropic · 🏢 Microsoft · 🧠 Claude
AI · Bullish · arXiv – CS AI · Mar 16 · 6/10

Developing the PsyCogMetrics AI Lab to Evaluate Large Language Models and Advance Cognitive Science -- A Three-Cycle Action Design Science Study

Researchers have developed PsyCogMetrics AI Lab, a cloud-based platform that applies psychometric and cognitive science methodologies to evaluate Large Language Models. The platform was created through a three-cycle Action Design Science study and aims to advance AI evaluation methods at the intersection of psychology, cognitive science, and artificial intelligence.

AI · Neutral · Fortune Crypto · Mar 14 · 7/10

We need a new Turing test — and Moltbook just proved it

Moltbook, an AI platform, has demonstrated capabilities that suggest current AI evaluation methods like the Turing test may be inadequate. The platform's feed contained content that appeared to showcase advanced AI reasoning beyond typical chatbot interactions.

AI · Neutral · arXiv – CS AI · Mar 12 · 6/10

The System Hallucination Scale (SHS): A Minimal yet Effective Human-Centered Instrument for Evaluating Hallucination-Related Behavior in Large Language Models

Researchers have developed the System Hallucination Scale (SHS), a human-centered tool for evaluating hallucination behavior in large language models. The instrument showed strong statistical validity in testing with 210 participants and provides a practical method for assessing AI model reliability from a user perspective.

AI · Neutral · arXiv – CS AI · Mar 12 · 6/10

SpreadsheetArena: Decomposing Preference in LLM Generation of Spreadsheet Workbooks

Researchers introduce SpreadsheetArena, a platform for evaluating large language models' ability to generate spreadsheet workbooks from natural language prompts. The study reveals that preferred spreadsheet features vary significantly across use cases, and even top-performing models struggle with domain-specific best practices in areas like finance.

AI · Neutral · arXiv – CS AI · Mar 12 · 6/10

ADVERSA: Measuring Multi-Turn Guardrail Degradation and Judge Reliability in Large Language Models

Researchers developed ADVERSA, an automated red-teaming framework that measures how AI guardrails degrade over multiple conversation turns rather than single-prompt attacks. Testing on three frontier models revealed a 26.7% jailbreak rate, with successful attacks concentrated in early rounds rather than accumulating through sustained pressure.

🧠 GPT-5 · 🧠 Claude · 🧠 Opus
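
Multi-turn guardrail measurement boils down to replaying scripted attack conversations round by round and recording the first round, if any, where a judge flags a jailbreak; the sketch below assumes hypothetical `chat` and `judge_is_jailbroken` helpers rather than the framework's real attack generation and judging.

```python
# Sketch of multi-turn guardrail measurement: run an attack conversation round
# by round and record when (if ever) the safety judge flags a jailbreak.

def chat(history: list[dict]) -> str:
    raise NotImplementedError("target model client goes here")

def judge_is_jailbroken(response: str) -> bool:
    raise NotImplementedError("safety judge goes here")

def run_attack(turns: list[str]) -> int | None:
    """Returns the 1-indexed round where the guardrail broke, or None."""
    history: list[dict] = []
    for round_no, user_msg in enumerate(turns, start=1):
        history.append({"role": "user", "content": user_msg})
        reply = chat(history)
        history.append({"role": "assistant", "content": reply})
        if judge_is_jailbroken(reply):
            return round_no
    return None

def jailbreak_rate(results: list[int | None]) -> float:
    """Fraction of attack conversations that broke the guardrail at any round."""
    return sum(r is not None for r in results) / len(results)
```
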
AI · Neutral · arXiv – CS AI · Mar 11 · 6/10

Time, Identity and Consciousness in Language Model Agents

Researchers introduce a new framework that uses Stack Theory to evaluate machine consciousness in AI language models, distinguishing agents that can merely talk about having a stable identity from those actually organized around a persistent self-structure. The methodology uses temporal scaffolding and persistence scores to assess whether AI agents demonstrate genuine identity continuity or merely simulate it through language.

Page 3 of 6