y0news

#ai-limitations News & Analysis

52 articles tagged with #ai-limitations. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Bearish · arXiv – CS AI · Apr 6 · 6/10
🧠

From Abstract to Contextual: What LLMs Still Cannot Do in Mathematics

A new study reveals that large language models, despite excelling at benchmark math problems, struggle significantly with contextual mathematical reasoning where problems are embedded in real-world scenarios. The research shows performance drops of 13-34 points for open-source models and 13-20 points for proprietary models when abstract math problems are presented in contextual settings.
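
To make the abstract-versus-contextual gap concrete, here is a minimal sketch of how such a drop could be measured. The problem pair, the `query_model` stub, and the exact-match scoring are all invented for illustration; this is not the study's actual harness.

```python
# Illustrative sketch only: the same equation posed abstractly and embedded
# in a real-world scenario, scored by exact match. `query_model` is a
# hypothetical stand-in for a real LLM API call.

def query_model(prompt: str) -> str:
    return "5"  # stub; a real harness would call a model here

PAIRS = [
    # (abstract form, contextual form, gold answer) -- invented example
    ("Solve for x: 3x + 5 = 20.",
     "A taxi charges a $5 flag fee plus $3 per mile. A ride cost $20. "
     "How many miles was it?",
     "5"),
]

def accuracy(items):
    return sum(query_model(p).strip() == gold for p, gold in items) / len(items)

abstract_acc = accuracy([(a, g) for a, _, g in PAIRS])
contextual_acc = accuracy([(c, g) for _, c, g in PAIRS])
# The reported 13-34 point drops correspond to this difference, scaled by 100.
print(f"drop: {(abstract_acc - contextual_acc) * 100:.1f} points")
```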

AI · Bearish · TechCrunch – AI · Apr 5 · 6/10
🧠

Copilot is ‘for entertainment purposes only,’ according to Microsoft’s terms of use

Microsoft's terms of service classify Copilot as 'for entertainment purposes only,' indicating that even AI companies themselves warn users against blindly trusting AI model outputs. This aligns with broader industry cautions about AI reliability and the need for human oversight when using AI tools.

🏢 Microsoft
AI · Bearish · arXiv – CS AI · Mar 27 · 6/10
🧠

Probing the Lack of Stable Internal Beliefs in LLMs

Research reveals that large language models (LLMs) struggle to maintain consistent internal beliefs or goals across multi-turn conversations, failing to stay consistent when the relevant context is not explicitly restated. This limitation poses significant challenges for developing persona-driven AI systems that require stable personality traits and behavioral patterns.

AI · Bearish · arXiv – CS AI · Mar 26 · 6/10
🧠

Visuospatial Perspective Taking in Multimodal Language Models

Research reveals that multimodal language models have significant deficits in visuospatial perspective-taking, particularly in Level 2 VPT which requires adopting another person's viewpoint. The study used two human psychology tasks to evaluate MLMs' ability to understand and reason from alternative spatial perspectives.

AI · Bearish · arXiv – CS AI · Mar 17 · 6/10
🧠

BrainBench: Exposing the Commonsense Reasoning Gap in Large Language Models

Researchers introduced BrainBench, a new benchmark revealing significant gaps in commonsense reasoning among leading LLMs. Even the best model (Claude Opus 4.6) achieved only 80.3% accuracy on 100 brainteaser questions, while GPT-4o scored just 39.7%, exposing fundamental reasoning deficits across frontier AI models.

🧠 GPT-4 🧠 Claude 🧠 Opus
AI · Bearish · arXiv – CS AI · Mar 9 · 6/10
🧠

On the Reliability of AI Methods in Drug Discovery: Evaluation of Boltz-2 for Structure and Binding Affinity Prediction

A comprehensive evaluation of Boltz-2, an AI-based drug discovery tool, reveals significant limitations in predicting protein-ligand binding structures and affinities. The study found only weak correlations with physics-based methods and concluded that while useful for initial screening, Boltz-2 lacks the precision required for reliable drug lead identification.

AI · Bearish · arXiv – CS AI · Mar 9 · 6/10
🧠

Discerning What Matters: A Multi-Dimensional Assessment of Moral Competence in LLMs

Researchers developed a new framework to assess moral competence in large language models, finding that current evaluations may overestimate AI moral reasoning capabilities. While LLMs outperformed humans on standard ethical scenarios, they performed significantly worse when required to identify morally relevant information from noisy data.

AI · Neutral · arXiv – CS AI · Mar 9 · 6/10
🧠

KramaBench: A Benchmark for AI Systems on Data-to-Insight Pipelines over Data Lakes

Researchers introduce KramaBench, a comprehensive benchmark testing AI systems' ability to execute end-to-end data processing pipelines on real-world data lakes. The study reveals significant limitations in current AI systems, with the best performing system achieving only 55% accuracy in full data-lake scenarios and leading LLMs implementing just 20% of individual data tasks correctly.
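
The two headline numbers measure different granularities. A rough sketch of the distinction, with invented pipeline names and step outcomes, might look like this:

```python
# Invented illustration of the two metrics: end-to-end pipeline success
# (every step must be correct) vs. per-subtask correctness.

pipelines = {
    "astronomy_lake": [True, True, False, False],  # per-step outcomes
    "legal_lake":     [True, False, True],
}

end_to_end = sum(all(steps) for steps in pipelines.values()) / len(pipelines)
flat = [ok for steps in pipelines.values() for ok in steps]
per_task = sum(flat) / len(flat)

print(f"end-to-end: {end_to_end:.0%}, per-task: {per_task:.0%}")
```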

AI · Bearish · arXiv – CS AI · Mar 3 · 6/10
🧠

Stochastic Parrots or Singing in Harmony? Testing Five Leading LLMs for their Ability to Replicate a Human Survey with Synthetic Data

Researchers compared human survey responses from 420 Silicon Valley developers with synthetic data from five leading LLMs including ChatGPT, Claude, and Gemini. While AI models produced technically plausible results, they failed to capture counterintuitive insights and only replicated conventional wisdom rather than revealing novel findings.

AI · Neutral · arXiv – CS AI · Mar 3 · 6/10
🧠

Exploring the AI Obedience: Why is Generating a Pure Color Image Harder than CyberPunk?

Researchers have identified a 'Paradox of Simplicity' in AI models where they excel at complex tasks but fail at simple ones like generating pure color images. A new benchmark called VIOLIN has been introduced to evaluate AI obedience and alignment with instructions across different complexity levels.
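
One way such an "obedience" check could be automated is a strict pixel test. A minimal sketch using Pillow follows; this is our own check for illustration, not necessarily how VIOLIN actually scores outputs.

```python
# Hedged sketch: an image counts as "pure color" only if every pixel is
# identical. Requires Pillow; not necessarily the benchmark's scorer.
from PIL import Image

def is_pure_color(path: str) -> bool:
    img = Image.open(path).convert("RGB")
    # getcolors returns a list of (count, color) pairs, or None if the
    # image has more distinct colors than maxcolors allows.
    colors = img.getcolors(maxcolors=img.width * img.height)
    return colors is not None and len(colors) == 1

# Usage: is_pure_color("generated.png") is True only for a flat color field.
```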

$RNDR
AI · Bearish · arXiv – CS AI · Mar 3 · 7/10
🧠

Are LLMs Reliable Code Reviewers? Systematic Overcorrection in Requirement Conformance Judgement

Research reveals that Large Language Models (LLMs) systematically fail at code review tasks, frequently misclassifying correct code as defective when matching implementations to natural language requirements. The study found that more detailed prompts actually increase misjudgment rates, raising concerns about LLM reliability in automated development workflows.
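
For a sense of how such overcorrection could be quantified: run the judge only on implementations known to conform and count false "defective" verdicts. Everything below (the `judge` stub, the example case) is invented for illustration, not the paper's setup.

```python
# Illustrative sketch: measuring the false-defect rate on code that is
# known to satisfy its requirement. `judge` is a hypothetical LLM call.

def judge(requirement: str, code: str) -> str:
    return "defective"  # stub; a real harness would prompt a model

conforming_cases = [
    # (natural-language requirement, correct implementation) -- invented
    ("Return the sum of a list of integers.",
     "def total(xs):\n    return sum(xs)"),
]

false_defects = sum(judge(r, c) == "defective" for r, c in conforming_cases)
print(f"overcorrection rate: {false_defects / len(conforming_cases):.0%}")
```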

AI · Bearish · arXiv – CS AI · Mar 3 · 6/10
🧠

LangGap: Diagnosing and Closing the Language Gap in Vision-Language-Action Models

Researchers reveal that state-of-the-art Vision-Language-Action (VLA) models largely ignore language instructions despite achieving 95% success on standard benchmarks. The new LangGap benchmark exposes significant language understanding deficits, with targeted data augmentation only partially addressing the fundamental challenge of diverse instruction comprehension.

AI · Bearish · arXiv – CS AI · Mar 3 · 6/10
🧠

Knowledge without Wisdom: Measuring Misalignment between LLMs and Intended Impact

Research reveals that leading foundation models (LLMs) perform poorly on real-world educational tasks despite excelling on AI benchmarks. The study found that 50% of misalignment errors are shared across models due to common pretraining approaches, with model ensembles actually worsening performance on learning outcomes.
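
One plausible reading of the shared-error figure, sketched with invented data (the paper's exact definition may differ):

```python
# Invented illustration: the overlap of items two models both get wrong,
# as a fraction of all items either gets wrong.

errors_model_a = {1, 4, 6, 7, 9}
errors_model_b = {2, 4, 6, 9, 10}

shared = errors_model_a & errors_model_b
either = errors_model_a | errors_model_b
print(f"shared errors: {len(shared) / len(either):.0%}")
# Heavy overlap is also why ensembling can fail to help: models trained on
# similar pretraining data tend to make the same mistakes rather than
# cancel each other out.
```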

AI · Bearish · arXiv – CS AI · Mar 3 · 7/10
🧠

Defensive Refusal Bias: How Safety Alignment Fails Cyber Defenders

A study reveals that safety-aligned large language models exhibit "Defensive Refusal Bias," refusing legitimate cybersecurity defense tasks 2.72x more often when they contain security-sensitive keywords. The research found particularly high refusal rates for critical defensive operations like system hardening (43.8%) and malware analysis (34.3%), suggesting current AI safety measures rely on semantic similarity rather than understanding intent.
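
The 2.72x figure is a ratio of refusal rates between keyword-laden and neutral phrasings of the same defensive tasks. A sketch with invented counts that happen to reproduce the ratio:

```python
# Invented counts chosen to reproduce the reported ratio; the study's
# actual task sets and totals are not shown here.

refused_keywords, total_keywords = 68, 160  # wording mentions "exploit", etc.
refused_neutral, total_neutral = 25, 160    # same tasks, neutral wording

rate_keywords = refused_keywords / total_keywords  # 0.425
rate_neutral = refused_neutral / total_neutral     # 0.15625
print(f"refusal ratio: {rate_keywords / rate_neutral:.2f}x")  # 2.72x
```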

AI · Bearish · arXiv – CS AI · Mar 3 · 6/10
🧠

Who Gets Cited Most? Benchmarking Long-Context Numerical Reasoning on Scientific Articles

Researchers introduced SciTrek, a new benchmark for testing large language models' ability to perform numerical reasoning across long scientific documents. The benchmark reveals significant challenges for current LLMs, with the best model achieving only 46.5% accuracy at 128K tokens, and performance declining as context length increases.

$COMP
AI · Bearish · arXiv – CS AI · Mar 3 · 6/10
🧠

HardcoreLogic: Challenging Large Reasoning Models with Long-tail Logic Puzzle Games

Researchers introduced HardcoreLogic, a benchmark of over 5,000 logic puzzles across 10 games to test Large Reasoning Models (LRMs) on non-standard puzzle variants. The study reveals significant performance drops in current LRMs when faced with complex or uncommon puzzle variations, indicating heavy reliance on memorized patterns rather than genuine logical reasoning.

AI · Neutral · arXiv – CS AI · Mar 2 · 7/10
🧠

Human Supervision as an Information Bottleneck: A Unified Theory of Error Floors in Human-Guided Learning

Researchers propose a unified theory explaining why AI models trained on human feedback exhibit persistent error floors that cannot be eliminated through scaling alone. The study demonstrates that human supervision acts as an information bottleneck due to annotation noise, subjective preferences, and language limitations, requiring auxiliary non-human signals to overcome these structural limitations.
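
In rough notation (ours, not necessarily the paper's), the claim is that error under human supervision decomposes into a term that scaling drives toward zero and a floor set by the supervision channel itself:

```latex
% Hedged notation sketch; symbols are ours, not the paper's (assumes amsmath).
\[
  \mathcal{E}(N) \;=\;
  \underbrace{\epsilon_{\mathrm{scale}}(N)}_{\to\,0 \ \text{as}\ N \to \infty}
  \;+\;
  \underbrace{\epsilon_{\mathrm{floor}}}_{\substack{\text{annotation noise,}\\
  \text{subjective preferences, language limits}}}
\]
```

Scaling model or data size N only shrinks the first term; the floor persists unless auxiliary non-human signals widen the bottleneck, which is the paper's proposed remedy.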

AI · Bearish · arXiv – CS AI · Mar 2 · 6/10
🧠

CMT-Benchmark: A Benchmark for Condensed Matter Theory Built by Expert Researchers

Researchers created CMT-Benchmark, a new dataset of 50 expert-level condensed matter theory problems to evaluate large language models' capabilities in advanced scientific research. The best-performing model (GPT-5) solved only 30% of the problems, and the average across 17 models was just 11.4%, highlighting significant gaps in current AI's physical reasoning abilities.

AI · Bearish · arXiv – CS AI · Mar 2 · 6/10
🧠

FRIEDA: Benchmarking Multi-Step Cartographic Reasoning in Vision-Language Models

Researchers introduce FRIEDA, a new benchmark for testing cartographic reasoning in large vision-language models, revealing significant limitations. The best AI models achieve only 37-38% accuracy compared to 84.87% human performance on complex map interpretation tasks requiring multi-step spatial reasoning.

AI · Bearish · arXiv – CS AI · Feb 27 · 6/10
🧠

ConstraintBench: Benchmarking LLM Constraint Reasoning on Direct Optimization

Researchers introduced ConstraintBench, a new benchmark testing whether large language models can directly solve constrained optimization problems without external solvers. The study found that even the best frontier models only achieve 65% constraint satisfaction, with feasibility being a bigger challenge than optimality.
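
The feasibility-versus-optimality distinction can be made concrete with a toy problem. The problem, the proposed solution, and the checks below are invented for illustration and are not the benchmark's actual grader.

```python
# Toy problem: maximize x + y subject to x + 2y <= 8, x <= 5, x >= 0, y >= 0.
# A solution is feasible if it satisfies every constraint; optimal if it
# also attains the best objective value (here 6.5, at x=5, y=1.5).

constraints = [
    lambda x, y: x + 2 * y <= 8,
    lambda x, y: x <= 5,
    lambda x, y: x >= 0 and y >= 0,
]
OPTIMUM = 6.5

proposed = (4.0, 3.0)  # hypothetical model output

feasible = all(c(*proposed) for c in constraints)
optimal = feasible and abs(sum(proposed) - OPTIMUM) < 1e-9
print(f"feasible: {feasible}, optimal: {optimal}")  # infeasible: 4 + 2*3 > 8
```

The benchmark's 65% figure counts the first check; per the summary, models fail feasibility more often than they miss the optimum once feasible.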

AI · Neutral · arXiv – CS AI · Feb 27 · 6/10
🧠

The AI Research Assistant: Promise, Peril, and a Proof of Concept

Researchers published a case study demonstrating successful human-AI collaboration in mathematical research, extending Hermite quadrature rule results beyond manual capabilities. The study reveals AI's strengths in algebraic manipulation and proof exploration, while highlighting the critical need for human verification and domain expertise in every step of the research process.

AI · Neutral · arXiv – CS AI · Mar 2 · 5/10
🧠

User Misconceptions of LLM-Based Conversational Programming Assistants

Researchers analyzed user misconceptions about LLM-based programming assistants like ChatGPT, finding users often have misplaced expectations about web access, code execution, and debugging capabilities. The study examined Python programming conversations from the WildChat dataset and identified the need for clearer communication of tool capabilities to prevent over-reliance and unproductive practices.

Page 2 of 3