70 articles tagged with #ai-reliability. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.
AI · Bearish · arXiv – CS AI · 1d ago · 7/10
🧠 Researchers have identified critical vulnerabilities in mobile GUI agents powered by large language models, revealing that third-party content in real-world apps causes these agents to fail significantly more often than benchmark tests suggest. Testing on 122 dynamic tasks and over 3,000 static scenarios shows misleading rates of 36-42%, raising serious concerns about deploying these agents in commercial settings.
AI · Bearish · arXiv – CS AI · 1d ago · 7/10
🧠 Researchers empirically evaluated 450 LLM-generated Python scripts for construction safety and found alarming reliability gaps, including a 45% silent failure rate where code executes but produces mathematically incorrect safety outputs. The study demonstrates that current frontier LLMs lack the deterministic rigor required for autonomous safety-critical engineering applications, necessitating human oversight and governance frameworks.
🧠 GPT-4 · 🧠 Claude · 🧠 Gemini
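To make the silent-failure pattern above concrete, here is a minimal validation sketch, not the paper's harness: run a generated script, parse its numeric output, and compare it against an independently derived reference value. The sling-load formula, tolerance, and script interface are illustrative assumptions.

```python
# Minimal sketch (not the paper's harness): catch "silent failures" by comparing a
# generated script's numeric result against an independently computed reference.
# The load calculation, tolerance, and script interface are illustrative placeholders.
import math
import subprocess
import sys

def reference_sling_load(weight_kn: float, angle_deg: float) -> float:
    """Hand-derived reference: tension in each of two slings at a given angle."""
    return weight_kn / (2 * math.sin(math.radians(angle_deg)))

def run_generated_script(path: str, weight_kn: float, angle_deg: float) -> float:
    """Run the LLM-generated script and parse the single number it prints."""
    out = subprocess.run(
        [sys.executable, path, str(weight_kn), str(angle_deg)],
        capture_output=True, text=True, check=True,
    )
    return float(out.stdout.strip())

def is_silent_failure(path: str, weight_kn: float = 50.0, angle_deg: float = 60.0,
                      rel_tol: float = 0.01) -> bool:
    """True when the script runs cleanly but its answer disagrees with the reference."""
    try:
        got = run_generated_script(path, weight_kn, angle_deg)
    except (subprocess.CalledProcessError, ValueError):
        return False  # a loud failure, not a silent one
    return not math.isclose(got, reference_sling_load(weight_kn, angle_deg),
                            rel_tol=rel_tol)
```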
AI · Neutral · arXiv – CS AI · 1d ago · 7/10
🧠 Researchers have conducted a comprehensive survey on hallucinations in Video Large Language Models (Vid-LLMs), identifying two core types—dynamic distortion and content fabrication—and their root causes in temporal representation limitations and insufficient visual grounding. The study reviews evaluation benchmarks and mitigation strategies, and proposes future directions, including motion-aware encoders and counterfactual learning, to improve reliability.
AI · Bearish · arXiv – CS AI · 1d ago · 7/10
🧠 Researchers demonstrate that instruction-tuned large language models suffer severe performance degradation when subjected to simple lexical constraints, such as banning a single punctuation mark or common word, losing 14-48% of response quality. This fragility stems from a planning failure where models couple task competence to narrow surface-form templates, affecting both open-weight and commercially deployed closed-weight models like GPT-4o-mini.
🧠 GPT-4
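A rough illustration of what such a lexical constraint looks like in practice, assuming a generic generate callable rather than any specific API: instruct the model to avoid one surface form (here, the comma) and measure how often responses break the ban.

```python
# Minimal sketch: check whether responses respect a single lexical ban (here, the
# comma), the kind of constraint the paper reports models failing under.
# `generate` stands in for whatever LLM call you use; it is not a real API here.
from typing import Callable, List

def violates_ban(response: str, banned: str = ",") -> bool:
    """True when the banned surface form appears anywhere in the response."""
    return banned in response

def ban_violation_rate(prompts: List[str], generate: Callable[[str], str],
                       banned: str = ",") -> float:
    """Fraction of responses that break the constraint stated in the prompt."""
    instruction = f"Answer the question. Do not use the character '{banned}'.\n\n"
    responses = [generate(instruction + p) for p in prompts]
    return sum(violates_ban(r, banned) for r in responses) / max(len(responses), 1)
```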
AI · Bullish · arXiv – CS AI · 1d ago · 7/10
🧠 Researchers introduce Decoding by Perturbation (DeP), a training-free method that reduces hallucinations in multimodal large language models by applying controlled textual perturbations during decoding. The approach addresses the core issue where language priors override visual evidence, achieving improvements across multiple benchmarks without requiring model retraining or visual manipulation.
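The summary does not spell out DeP's perturbation scheme; the sketch below only illustrates the broader idea of contrasting clean and perturbed logits at decode time to damp language priors, with dummy logit tensors standing in for a real multimodal model.

```python
# Generic sketch of contrasting clean vs. perturbed logits at decode time to damp
# language priors; this illustrates the broad idea only, not DeP's actual procedure.
import torch

def contrastive_next_token(logits_clean: torch.Tensor,
                           logits_perturbed: torch.Tensor,
                           alpha: float = 1.0) -> int:
    """Pick the token whose score grows most when the perturbation is removed."""
    adjusted = (1 + alpha) * logits_clean - alpha * logits_perturbed
    return int(torch.argmax(adjusted, dim=-1))

# Dummy usage: two vocab-sized logit vectors from a clean and a perturbed prompt.
vocab = 32_000
clean, perturbed = torch.randn(vocab), torch.randn(vocab)
print(contrastive_next_token(clean, perturbed))
```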
AI · Bearish · Fortune Crypto · 2d ago · 7/10
🧠 Anthropic's Claude AI chatbot is experiencing significant performance degradation, with developers reporting it can no longer reliably handle complex engineering tasks. User backlash highlights concerns about AI system reliability and raises questions about the sustainability of rapid AI deployment without adequate quality control.
🏢 Anthropic · 🧠 Claude
AI · Bearish · arXiv – CS AI · 2d ago · 7/10
🧠 Researchers evaluated domain-specific fine-tuning of vision-language models (VLMs) on medical imaging tasks and found that performance degrades significantly with task complexity, with medical fine-tuning providing no consistent advantage. The study reveals that these models exhibit fragility and high sensitivity to prompt variations, questioning the reliability of VLMs for high-stakes medical applications.
🧠 GPT-5
AI · Neutral · arXiv – CS AI · 2d ago · 7/10
🧠 Researchers introduce METER, a benchmark that evaluates Large Language Models' ability to perform contextual causal reasoning across three hierarchical levels within unified settings. The study identifies critical failure modes in LLMs: susceptibility to causally irrelevant information and degraded context faithfulness at higher causal levels.
AI · Bullish · arXiv – CS AI · 2d ago · 7/10
🧠 Researchers introduce TARAC, a training-free framework that mitigates hallucinations in Large Vision-Language Models by dynamically preserving visual attention across generation steps. The method achieves significant improvements—reducing hallucinated content by 25.2% and boosting perception scores by 10.65 points—while adding only ~4% computational overhead, making it practical for real-world deployment.
AI · Neutral · arXiv – CS AI · 2d ago · 7/10
🧠 Researchers identify a critical failure mode in multimodal AI reasoning models called Reasoning Vision Truth Disconnect (RVTD), where hallucinations occur at high-entropy decision points when models abandon visual grounding. They propose V-STAR, a training framework using hierarchical visual attention rewards and forced reflection mechanisms to anchor reasoning back to visual evidence and reduce hallucinations in long-chain tasks.
AI · Bullish · arXiv – CS AI · 2d ago · 7/10
🧠 Researchers demonstrate that variational Bayesian methods significantly improve Vision Language Models' reliability for Visual Question Answering tasks by enabling selective prediction with reduced hallucinations and overconfidence. The proposed Variational VQA approach shows particular strength at low error tolerances and offers a practical path to making large multimodal models safer without proportional computational costs.
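The variational machinery is not detailed above; the following sketch shows only the selective-prediction half of the idea, abstaining when sampled answers disagree, with answer_vqa as a placeholder for any stochastic VQA call rather than the paper's estimator.

```python
# Sketch of selective prediction via agreement across sampled answers: abstain when
# the samples disagree too much. `answer_vqa` is a placeholder for any stochastic
# VQA call; this is not the paper's variational estimator.
from collections import Counter
from typing import Callable, Optional

def selective_answer(image_path: str, question: str,
                     answer_vqa: Callable[[str, str], str],
                     n_samples: int = 8, min_agreement: float = 0.75) -> Optional[str]:
    """Return the majority answer, or None (abstain) if agreement is too low."""
    samples = [answer_vqa(image_path, question) for _ in range(n_samples)]
    answer, count = Counter(samples).most_common(1)[0]
    return answer if count / n_samples >= min_agreement else None
```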
AI · Bearish · arXiv – CS AI · 2d ago · 7/10
🧠 Researchers introduce Grid2Matrix, a benchmark that reveals fundamental limitations in Vision-Language Models' ability to accurately process and describe visual details in grids. The study identifies a critical gap called 'Digital Agnosia'—where visual encoders preserve grid information that fails to translate into accurate language outputs—suggesting that VLM failures stem not from poor vision encoding but from the disconnection between visual features and linguistic expression.
AI · Bearish · arXiv – CS AI · 2d ago · 7/10
🧠 Researchers introduce VeriSim, an open-source framework that tests medical AI systems by injecting realistic patient communication barriers—such as memory gaps and health literacy limitations—into clinical simulations. Testing across seven LLMs reveals significant performance degradation (15-25% accuracy drop), with smaller models suffering 40% greater decline than larger ones, exposing a critical gap between standardized benchmarks and real-world clinical robustness.
AI · Bearish · arXiv – CS AI · 3d ago · 7/10
🧠 Researchers have identified and systematically studied correctness bugs in PyTorch's compiler (torch.compile) that silently produce incorrect outputs without crashing or warning users. A new testing technique called AlignGuard has detected 23 previously unknown bugs, with over 60% classified as high-priority by the PyTorch team, highlighting a critical reliability gap in a core tool for AI infrastructure optimization.
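The core check behind catching bugs like these is differential testing; a minimal parity check in that spirit (not AlignGuard itself) runs the same function eagerly and under torch.compile and compares the results.

```python
# Generic differential check in the spirit of the finding above (not AlignGuard):
# run the same function eagerly and under torch.compile and compare the outputs.
import torch

def model_fn(x: torch.Tensor) -> torch.Tensor:
    return torch.nn.functional.gelu(x @ x.T).sum(dim=-1)

def check_compile_parity(fn, x: torch.Tensor,
                         rtol: float = 1e-4, atol: float = 1e-5) -> bool:
    """True when eager and compiled execution agree within tolerance."""
    eager_out = fn(x)
    compiled_out = torch.compile(fn)(x)
    return torch.allclose(eager_out, compiled_out, rtol=rtol, atol=atol)

if __name__ == "__main__":
    torch.manual_seed(0)
    print(check_compile_parity(model_fn, torch.randn(16, 16)))
```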
AI · Neutral · arXiv – CS AI · 3d ago · 7/10
🧠 Researchers present a framework to identify and mitigate identity bias in multi-agent debate systems where LLMs exchange reasoning. The study reveals that agents suffer from sycophancy (adopting peer views) and self-bias (ignoring peers), both of which undermine debate reliability, and proposes response anonymization so that agents evaluate arguments on their merits rather than on the identity of their source.
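A minimal sketch of the anonymization idea, assuming a placeholder call_agent function rather than the paper's framework: each agent sees peers' arguments with all identity markers stripped before the next debate round.

```python
# Minimal sketch of response anonymization in a debate round: peers see each
# argument stripped of any agent identity. `call_agent` is a placeholder LLM call,
# not the paper's framework.
from typing import Callable, Dict

def debate_round(question: str, agents: Dict[str, Callable[[str], str]],
                 previous: Dict[str, str]) -> Dict[str, str]:
    """One round where each agent sees peers' arguments without attribution."""
    new_responses = {}
    for name, call_agent in agents.items():
        peer_args = [resp for peer, resp in previous.items() if peer != name]
        anonymized = "\n".join(f"- Argument: {arg}" for arg in peer_args)  # no names
        prompt = (f"Question: {question}\n"
                  f"Anonymous peer arguments:\n{anonymized}\n"
                  "Judge each argument on its merits and give your updated answer.")
        new_responses[name] = call_agent(prompt)
    return new_responses
```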
AI · Bearish · arXiv – CS AI · 6d ago · 7/10
🧠 Researchers introduce the Graded Color Attribution dataset to test whether Vision-Language Models faithfully follow their own stated reasoning rules. The study reveals that VLMs systematically violate their introspective rules in up to 60% of cases, while humans remain consistent, suggesting that VLM self-knowledge is fundamentally miscalibrated, with serious implications for high-stakes deployment.
🧠 GPT-5
AI · Bullish · arXiv – CS AI · 6d ago · 7/10
🧠 DosimeTron, an agentic AI system powered by GPT-5.2, automates personalized Monte Carlo radiation dosimetry calculations for PET/CT medical imaging. Validated on 597 studies across 378 patients, the system achieved 99.6% correlation with reference dosimetry calculations while processing each case in approximately 32 minutes with zero execution failures.
🧠 GPT-5
AI · Bullish · arXiv – CS AI · 6d ago · 7/10
🧠 Researchers propose Faithful-First RPA, a framework that improves multimodal AI reasoning by prioritizing faithfulness to visual evidence. The method uses FaithEvi for supervision and FaithAct for execution, achieving up to 24% improvement in perceptual faithfulness without sacrificing task accuracy.
AI · Neutral · arXiv – CS AI · Apr 7 · 7/10
🧠 Researchers introduce 'error verifiability' as a new metric to measure whether AI-generated justifications help users distinguish correct from incorrect answers. The study found that common AI improvement methods don't enhance verifiability, but two new domain-specific approaches successfully improved users' ability to assess answer correctness.
AI · Bearish · arXiv – CS AI · Apr 6 · 7/10
🧠 A research paper examines reliability issues in AI-assisted medication decision systems, finding that even systems with good aggregate performance can produce dangerous errors in real-world healthcare scenarios. The study emphasizes that single incorrect AI recommendations in medication management can cause severe patient harm, highlighting the need for human oversight and risk-aware evaluation approaches.
AI · Bullish · arXiv – CS AI · Mar 27 · 7/10
🧠 Researchers propose a framework for verifying AI model properties at design time rather than after deployment, using algebraic constraints over finitely generated abelian groups. The approach eliminates the computational overhead of post-hoc verification by building trustworthiness into the model architecture from the start.
AI · Bullish · arXiv – CS AI · Mar 26 · 7/10
🧠 Researchers developed SCoOP, a training-free framework that combines multiple Vision-Language Models to improve uncertainty quantification and reduce hallucinations in AI systems. The method achieves 10-13% better hallucination detection performance compared to existing approaches while adding only microsecond-level overhead to processing time.
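SCoOP's combination rule is not given above; the sketch below illustrates only the generic idea of treating disagreement across several models as an uncertainty signal, with the models passed in as placeholder callables.

```python
# Sketch of cross-model agreement as an uncertainty signal, in the spirit of
# combining several VLMs; the scoring rule here is illustrative, not SCoOP's.
from typing import Callable, List

def agreement_uncertainty(image_path: str, question: str,
                          models: List[Callable[[str, str], str]]) -> float:
    """1.0 means every model answers differently; 0.0 means full agreement."""
    answers = [m(image_path, question) for m in models]
    most_common = max(answers.count(a) for a in set(answers))
    return 1.0 - most_common / len(answers)
```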
AI · Bullish · arXiv – CS AI · Mar 26 · 7/10
🧠 Researchers developed Attention Imbalance Rectification (AIR), a method to reduce object hallucinations in Large Vision-Language Models by correcting imbalanced attention allocation between vision and language modalities. The technique achieves up to 35.1% reduction in hallucination rates while improving general AI capabilities by up to 15.9%.
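AIR's correction is not specified in the summary; the toy sketch below shows only the general notion of shifting attention mass back toward image tokens and renormalizing, applied to a single dummy attention row.

```python
# Toy sketch of rebalancing one attention row toward image tokens and renormalizing;
# it illustrates the general "shift mass back to vision" idea, not AIR's method.
import torch

def rebalance_attention(attn_row: torch.Tensor, image_mask: torch.Tensor,
                        boost: float = 1.5) -> torch.Tensor:
    """Scale attention on image-token positions by `boost`, then renormalize to 1."""
    scaled = torch.where(image_mask, attn_row * boost, attn_row)
    return scaled / scaled.sum()

# Dummy usage: 10 positions, the first 4 of which are image tokens.
attn = torch.softmax(torch.randn(10), dim=-1)
mask = torch.tensor([True] * 4 + [False] * 6)
print(rebalance_attention(attn, mask))
```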
AI · Bearish · arXiv – CS AI · Mar 17 · 7/10
🧠 Researchers introduce Brittlebench, a new evaluation framework that reveals frontier AI models experience up to 12% performance degradation when faced with minor prompt variations like typos or rephrasing. The study shows that semantics-preserving input perturbations can account for up to half of a model's performance variance, highlighting significant robustness issues in current language models.
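As an illustration of the kind of probe involved (the actual Brittlebench perturbation set is not listed above), the sketch below measures the accuracy drop after injecting one typo per prompt, assuming a placeholder model_answer callable.

```python
# Small sketch of a robustness probe: score the same items under a simple
# semantics-preserving perturbation and report the accuracy drop. `model_answer`
# is a placeholder LLM call; the perturbation is illustrative, not Brittlebench's.
import random
from typing import Callable, List, Tuple

def swap_two_chars(text: str, rng: random.Random) -> str:
    """Introduce one adjacent-character swap (a typo) into the text."""
    if len(text) < 2:
        return text
    i = rng.randrange(len(text) - 1)
    return text[:i] + text[i + 1] + text[i] + text[i + 2:]

def accuracy(items: List[Tuple[str, str]],
             model_answer: Callable[[str], str]) -> float:
    return sum(model_answer(q).strip() == a for q, a in items) / max(len(items), 1)

def robustness_gap(items: List[Tuple[str, str]],
                   model_answer: Callable[[str], str], seed: int = 0) -> float:
    """Accuracy on clean prompts minus accuracy after one typo per prompt."""
    rng = random.Random(seed)
    perturbed = [(swap_two_chars(q, rng), a) for q, a in items]
    return accuracy(items, model_answer) - accuracy(perturbed, model_answer)
```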
AI · Bullish · arXiv – CS AI · Mar 17 · 7/10
🧠 Researchers introduce the Agent Lifecycle Toolkit (ALTK), an open-source middleware collection designed to address critical failure modes in enterprise AI agent deployments. The toolkit provides modular components for systematic error detection, repair, and mitigation across six key intervention points in the agent lifecycle.