y0news

#hallucinations News & Analysis

12 articles tagged with #hallucinations. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Neutral · arXiv – CS AI · 6d ago · 7/10
🧠

Distorted or Fabricated? A Survey on Hallucination in Video LLMs

Researchers have conducted a comprehensive survey on hallucinations in Video Large Language Models (Vid-LLMs), identifying two core types, dynamic distortion and content fabrication, and their root causes in temporal representation limitations and insufficient visual grounding. The study reviews evaluation benchmarks, mitigation strategies, and proposes future directions including motion-aware encoders and counterfactual learning to improve reliability.

AI · Neutral · arXiv – CS AI · Apr 14 · 7/10
🧠

Cognitive Pivot Points and Visual Anchoring: Unveiling and Rectifying Hallucinations in Multimodal Reasoning Models

Researchers identify a critical failure mode in multimodal AI reasoning models called Reasoning Vision Truth Disconnect (RVTD), where hallucinations occur at high-entropy decision points when models abandon visual grounding. They propose V-STAR, a training framework using hierarchical visual attention rewards and forced reflection mechanisms to anchor reasoning back to visual evidence and reduce hallucinations in long-chain tasks.
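
The "high-entropy decision points" idea can be illustrated with a toy sketch (function names and the threshold are illustrative, not from the paper): flag the decoding steps whose next-token distribution has high entropy, since those are the pivot points where reasoning is most likely to detach from the visual evidence.

```python
import math

def token_entropy(probs):
    """Shannon entropy (nats) of a next-token distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def flag_pivot_points(step_probs, threshold=1.0):
    """Return indices of decoding steps whose entropy exceeds the
    threshold -- candidate 'cognitive pivot points'."""
    return [i for i, probs in enumerate(step_probs)
            if token_entropy(probs) > threshold]

steps = [
    [0.97, 0.01, 0.01, 0.01],  # confident step
    [0.4, 0.3, 0.2, 0.1],      # high-entropy pivot candidate
    [0.9, 0.05, 0.03, 0.02],
]
pivots = flag_pivot_points(steps, threshold=1.0)
```

V-STAR's contribution is what to do at such points (visual attention rewards, forced reflection); this sketch only shows how the trigger condition might be detected.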

AI · Bearish · arXiv – CS AI · Apr 13 · 7/10
🧠

From Dispersion to Attraction: Spectral Dynamics of Hallucination Across Whisper Model Scales

Researchers propose the Spectral Sensitivity Theorem to explain hallucinations in large ASR models like Whisper, identifying a phase transition between dispersive and attractor regimes. Analysis of model eigenspectra reveals that intermediate models experience structural breakdown while large models compress information, decoupling from acoustic evidence and increasing hallucination risk.
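
As a loose intuition for the dispersive-versus-attractor distinction (the matrices and scales here are made up, not the paper's analysis): a weight matrix whose eigenvalues stay small and spread out behaves dispersively, while adding a dominant low-rank direction creates an attractor that pulls states toward it regardless of the input.

```python
import numpy as np

def spectral_radius(weights: np.ndarray) -> float:
    """Largest absolute eigenvalue of a square weight matrix."""
    return float(np.max(np.abs(np.linalg.eigvals(weights))))

rng = np.random.default_rng(0)
n = 64
# Dispersive regime: small random weights, eigenvalues spread near zero.
dispersive = rng.normal(scale=0.02, size=(n, n))
# Attractor regime: a dominant rank-1 direction (eigenvalue ~0.5)
# pulls dynamics toward one direction, decoupled from the input.
attractor = dispersive + 0.5 * np.outer(np.ones(n), np.ones(n)) / n

r_disp = spectral_radius(dispersive)
r_attr = spectral_radius(attractor)
```

The paper's claim is that intermediate Whisper scales sit near the transition between these regimes, where hallucination risk peaks.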

AI · Neutral · arXiv – CS AI · Apr 7 · 7/10
🧠

When Do Hallucinations Arise? A Graph Perspective on the Evolution of Path Reuse and Path Compression

Researchers have identified two key mechanisms behind reasoning hallucinations in large language models: Path Reuse and Path Compression. The study models next-token prediction as graph search, showing how memorized knowledge can override contextual constraints and how frequently used reasoning paths become shortcuts that lead to unsupported conclusions.
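
The Path Reuse mechanism can be sketched with a toy graph model (class and method names are illustrative): if edge weights count how often a token transition was seen in training, greedy decoding follows the most-reused edge even when the current context supports a different continuation.

```python
from collections import defaultdict

class PathGraph:
    """Toy next-token graph: edge weights count how often a
    transition was seen in 'training' (path reuse)."""
    def __init__(self):
        self.edges = defaultdict(lambda: defaultdict(int))

    def observe(self, sequence):
        for a, b in zip(sequence, sequence[1:]):
            self.edges[a][b] += 1

    def next_token(self, token):
        """Greedy decode: the most-reused edge wins, even if the
        contextually correct continuation is less frequent."""
        candidates = self.edges[token]
        if not candidates:
            return None
        return max(candidates, key=candidates.get)

g = PathGraph()
# A memorized path, seen many times:
for _ in range(10):
    g.observe(["capital", "of", "France", "is", "Paris"])
# The current context says "Lyon" once, but the reused path dominates.
g.observe(["capital", "of", "France", "is", "Lyon"])
```

Here `g.next_token("is")` returns the memorized continuation, which is exactly the shortcut-over-context failure the paper formalizes.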

AI · Neutral · arXiv – CS AI · Mar 17 · 7/10
🧠

Distributional Semantics Tracing: A Framework for Explaining Hallucinations in Large Language Models

Researchers introduce Distributional Semantics Tracing (DST), a new framework for explaining hallucinations in large language models by tracking how semantic representations drift across neural network layers. The method reveals that hallucinations occur when models are pulled toward contextually inconsistent concepts based on training correlations rather than actual prompt context.
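
The drift-across-layers idea can be simulated in a few lines (the vectors and interpolation below are a made-up stand-in, not DST itself): measure each layer's hidden state against the concept the prompt asks about, and watch the similarity fall as the state is pulled toward a correlated training prior.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(1)
dim = 16
prompt_concept = rng.normal(size=dim)  # what the prompt asks about
prior_concept = rng.normal(size=dim)   # a correlated training concept

# Simulated layer-by-layer hidden states drifting from the prompt
# concept toward the training prior -- the failure mode DST traces.
layers = [
    (1 - t) * prompt_concept + t * prior_concept
    for t in np.linspace(0.0, 0.9, 6)
]
drift = [cosine(h, prompt_concept) for h in layers]
```

A falling `drift` curve is the signature the framework looks for: later layers no longer represent what the prompt actually asked.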

AI · Bearish · arXiv – CS AI · Mar 12 · 7/10
🧠

Quantifying Hallucinations in Large Language Models on Medical Textbooks

A study finds that LLaMA-70B-Instruct hallucinated in 19.7% of medical Q&A responses despite high plausibility scores, highlighting significant reliability issues in AI healthcare applications. The study shows that lower hallucination rates correlate with higher usefulness scores, emphasizing the need for better safeguards in medical AI systems.

AI · Bullish · Crypto Briefing · Mar 3 · 7/10
🧠

OpenAI releases GPT-5.3 Instant with fewer refusals and improved web answers

OpenAI has released GPT-5.3 Instant for ChatGPT, featuring reduced refusals, enhanced web-based answers, and fewer hallucinations across major performance benchmarks. This update represents a significant improvement in AI model reliability and user experience.

AI · Neutral · Apple Machine Learning · Apr 13 · 6/10
🧠

Cram Less to Fit More: Training Data Pruning Improves Memorization of Facts

Researchers present a data pruning technique that improves how large language models memorize factual knowledge by optimizing training data distribution. The work, grounded in information-theoretic analysis, addresses the gap between theoretical model capacity and actual factual accuracy, offering practical methods to reduce hallucinations in knowledge-intensive tasks.
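
A crude version of the pruning idea (the duplicate-cap criterion below is illustrative, not the paper's information-theoretic one): cap how many copies of each fact survive, so model capacity is not spent re-memorizing frequent facts at the expense of rare ones.

```python
from collections import Counter

def prune_training_facts(facts, max_copies=2):
    """Keep at most max_copies of each fact, preserving order.
    Illustrative stand-in for a principled pruning criterion."""
    kept, seen = [], Counter()
    for fact in facts:
        if seen[fact] < max_copies:
            kept.append(fact)
            seen[fact] += 1
    return kept

facts = ["A"] * 5 + ["B"] * 2 + ["C"]
pruned = prune_training_facts(facts, max_copies=2)
```

The pruned set still contains every distinct fact, but the distribution is flatter, which is the direction the paper argues improves factual memorization per unit of capacity.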

AI · Neutral · arXiv – CS AI · Apr 10 · 6/10
🧠

Steering the Verifiability of Multimodal AI Hallucinations

Researchers have developed a method to control how verifiable AI hallucinations are in multimodal language models by distinguishing between obvious hallucinations (easily detected by humans) and elusive ones (harder to spot). Using a dataset of 4,470 human responses, they created targeted interventions that can fine-tune which types of hallucinations occur, enabling flexible control suited to different security and usability requirements.

AI · Bullish · arXiv – CS AI · Mar 26 · 6/10
🧠

Mitigating LLM Hallucinations through Domain-Grounded Tiered Retrieval

Researchers propose a new four-phase architecture to reduce AI hallucinations using domain-specific retrieval and verification systems. The framework achieved win rates up to 83.7% across multiple benchmarks, demonstrating significant improvements in factual accuracy for large language models.
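
The tiered-retrieval-plus-verification pattern can be sketched as a fallback loop (the tier names, toy corpora, and verifier here are invented for illustration; the paper's four phases are more involved): try the most trusted domain corpus first, fall back to broader indexes, and only return a passage the verifier accepts, abstaining otherwise.

```python
def tiered_retrieve(query, tiers, verify):
    """Try each retrieval tier in order; return the first passage
    the verifier accepts, else (None, None) -- abstain rather
    than hallucinate."""
    for tier_name, search in tiers:
        passage = search(query)
        if passage is not None and verify(query, passage):
            return tier_name, passage
    return None, None

# Toy tiers: a curated domain corpus first, then a broader index.
curated = {"insulin": "Insulin regulates blood glucose."}
broad = {"aspirin": "Aspirin inhibits COX enzymes."}
tiers = [("curated", curated.get), ("broad", broad.get)]

def verify(query, passage):
    # Stand-in grounding check: the query term appears in the passage.
    return query in passage.lower()

tier, passage = tiered_retrieve("aspirin", tiers, verify)
```

The design point is that answer generation is gated on verified retrieval, so the model never has to answer from parametric memory alone.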

AI · Bullish · Google DeepMind Blog · Dec 17 · 6/10
🧠

FACTS Grounding: A new benchmark for evaluating the factuality of large language models

Researchers have introduced FACTS Grounding, a new benchmark designed to evaluate how accurately large language models ground their responses in source material and avoid hallucinations. The benchmark includes a comprehensive evaluation system and online leaderboard to measure LLM factuality performance.