y0news

#hallucination-mitigation News & Analysis

14 articles tagged with #hallucination-mitigation. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Bullish · arXiv – CS AI · 4d ago · 7/10

Variational Visual Question Answering for Uncertainty-Aware Selective Prediction

Researchers demonstrate that variational Bayesian methods significantly improve Vision Language Models' reliability for Visual Question Answering tasks by enabling selective prediction with reduced hallucinations and overconfidence. The proposed Variational VQA approach shows particular strength at low error tolerances and offers a practical path to making large multimodal models safer without proportional computational costs.
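The selective-prediction idea behind the paper can be sketched in a few lines (an illustrative toy, not the authors' implementation; `selective_predict` and its agreement-rate confidence proxy are invented here): draw answers from several posterior weight samples and abstain when the samples disagree.

```python
from collections import Counter

def selective_predict(sampled_answers, abstain_threshold=0.5):
    """Toy selective prediction: given answers drawn from K sampled
    model weights for one question, answer only when the samples
    mostly agree; otherwise abstain instead of risking a hallucination."""
    counts = Counter(sampled_answers)
    answer, freq = counts.most_common(1)[0]
    confidence = freq / len(sampled_answers)  # agreement rate as a confidence proxy
    if confidence < abstain_threshold:
        return None, confidence  # abstain
    return answer, confidence

# Strong agreement -> commit; heavy disagreement -> abstain.
print(selective_predict(["cat"] * 9 + ["dog"]))   # ('cat', 0.9)
print(selective_predict(["a", "b", "c", "d"]))    # (None, 0.25)
```

Lowering `abstain_threshold` trades coverage for risk, which is what "strength at low error tolerances" refers to: a stricter threshold answers fewer questions but makes fewer confident mistakes.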

AI · Bullish · arXiv – CS AI · Apr 10 · 7/10

Faithful-First Reasoning, Planning, and Acting for Multimodal LLMs

Researchers propose Faithful-First RPA, a framework that improves multimodal AI reasoning by prioritizing faithfulness to visual evidence. The method uses FaithEvi for supervision and FaithAct for execution, achieving up to 24% improvement in perceptual faithfulness without sacrificing task accuracy.

AI · Bullish · arXiv – CS AI · Apr 6 · 7/10

OSCAR: Orchestrated Self-verification and Cross-path Refinement

Researchers introduce OSCAR, a training-free framework that reduces AI hallucinations in diffusion language models by using cross-chain entropy to detect uncertain token positions during generation. The system runs parallel denoising chains and performs targeted remasking with retrieved evidence to improve factual accuracy without requiring external hallucination classifiers.
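The cross-chain entropy signal is easy to illustrate (a toy sketch, not OSCAR itself; function names are invented): run the per-position token choices of several parallel chains through an entropy calculation, and flag positions where the chains disagree as remasking candidates.

```python
import math
from collections import Counter

def cross_chain_entropy(chains):
    """Per-position entropy of the tokens chosen by parallel denoising
    chains. High entropy = the chains disagree = the model is uncertain
    at that position."""
    entropies = []
    for i in range(len(chains[0])):
        counts = Counter(chain[i] for chain in chains)
        total = len(chains)
        h = -sum((c / total) * math.log2(c / total) for c in counts.values())
        entropies.append(h)
    return entropies

def positions_to_remask(chains, threshold=0.5):
    """Indices whose cross-chain entropy exceeds the threshold."""
    return [i for i, h in enumerate(cross_chain_entropy(chains)) if h > threshold]

chains = [["the", "cat", "sat"],
          ["the", "dog", "sat"],
          ["the", "cat", "sat"],
          ["the", "fox", "sat"]]
print(positions_to_remask(chains))  # [1] — only the middle token is contested
```

In OSCAR's pipeline, the flagged positions would then be remasked and regenerated with retrieved evidence in context, rather than re-sampled blindly.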

AI · Bullish · arXiv – CS AI · Mar 12 · 7/10

Adaptive Activation Cancellation for Hallucination Mitigation in Large Language Models

Researchers developed Adaptive Activation Cancellation (AAC), a real-time framework that reduces hallucinations in large language models by identifying and suppressing problematic neural activations during inference. The method requires no fine-tuning or external knowledge and preserves model capabilities while improving factual accuracy across multiple model scales including LLaMA 3-8B.
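The core suppression step reduces to a simple transform (a minimal sketch under the assumption that problematic units have already been identified; `cancel_activations` and `alpha` are names invented for illustration, not AAC's API): dampen the flagged hidden units at inference time and pass everything else through unchanged.

```python
def cancel_activations(activations, flagged, alpha=1.0):
    """Scale down hidden units indexed in `flagged` by factor (1 - alpha);
    alpha=1.0 cancels them entirely. A stand-in for targeted activation
    suppression at inference time -- no fine-tuning involved."""
    return [a * (1.0 - alpha) if i in flagged else a
            for i, a in enumerate(activations)]

# Unit 1 has been flagged as hallucination-associated.
print(cancel_activations([1.0, 2.0, 3.0], flagged={1}))             # [1.0, 0.0, 3.0]
print(cancel_activations([1.0, 2.0, 3.0], flagged={1}, alpha=0.5))  # [1.0, 1.0, 3.0]
```

In a real model this would run inside a forward hook on the relevant layer; the "adaptive" part of AAC is deciding, per input, which units to flag and how hard to suppress them.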

๐Ÿข Perplexity
AI · Bullish · arXiv – CS AI · Mar 4 · 6/10 · 3

Through the Lens of Contrast: Self-Improving Visual Reasoning in VLMs

Researchers introduce VC-STaR, a new framework that improves visual reasoning in vision-language models by using contrastive image pairs to reduce hallucinations. The approach builds VisCoR-55K, a new dataset; models fine-tuned on it outperform existing visual reasoning methods.

AI · Bullish · arXiv – CS AI · Mar 4 · 6/10 · 3

Self-Aug: Query and Entropy Adaptive Decoding for Large Vision-Language Models

Researchers developed a new training-free decoding strategy for Large Vision-Language Models that reduces hallucinations by using query-adaptive visual augmentation and entropy-based token selection. The method showed significant improvements in factual consistency across four LVLMs and seven benchmarks compared to existing approaches.

AI · Neutral · arXiv – CS AI · 5d ago · 6/10

Noise-Aware In-Context Learning for Hallucination Mitigation in ALLMs

Researchers propose Noise-Aware In-Context Learning (NAICL), a plug-and-play method to reduce hallucinations in auditory large language models without expensive fine-tuning. The approach uses a noise prior library to guide models toward more conservative outputs, achieving a 37% reduction in hallucination rates while establishing a new benchmark for evaluating audio understanding systems.

AI · Bullish · arXiv – CS AI · 5d ago · 6/10

Constraining Sequential Model Editing with Editing Anchor Compression

Researchers propose Editing Anchor Compression (EAC), a framework that addresses degradation of large language models' general abilities during sequential knowledge editing. By constraining parameter matrix deviations through selective anchor compression, EAC preserves over 70% of model performance while maintaining edited knowledge, advancing the practical viability of model editing as an alternative to expensive retraining.
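The anchor-compression idea can be loosely illustrated (toy code, not the paper's algorithm; `compress_edit`, `k`, and `max_norm` are invented names): keep only the largest-magnitude components of an edit's parameter delta, then bound the deviation's overall norm so repeated edits cannot drift the weights far from the original model.

```python
import math

def compress_edit(delta, k, max_norm):
    """Keep the k largest-magnitude components of an edit delta (the
    'anchors'), zero the rest, and rescale so the deviation norm stays
    within max_norm -- constraining how far sequential edits can push
    the parameter matrix."""
    anchors = set(sorted(range(len(delta)), key=lambda i: abs(delta[i]),
                         reverse=True)[:k])
    compressed = [delta[i] if i in anchors else 0.0 for i in range(len(delta))]
    norm = math.sqrt(sum(x * x for x in compressed))
    if norm > max_norm:
        compressed = [x * max_norm / norm for x in compressed]
    return compressed

# Only the two dominant components survive; the norm bound is inactive here.
print(compress_edit([0.1, 2.0, -0.05, 1.0], k=2, max_norm=10.0))
# [0.0, 2.0, 0.0, 1.0]
```

The intuition matches the summary: most of an edit's knowledge lives in a few anchor directions, and discarding the rest limits collateral damage to general abilities.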

AI · Bullish · arXiv – CS AI · Apr 7 · 6/10

Focus Matters: Phase-Aware Suppression for Hallucination in Vision-Language Models

Researchers developed a new method to reduce hallucinations in Large Vision-Language Models (LVLMs) by identifying a three-phase attention structure in vision processing and selectively suppressing low-attention tokens during the focus phase. The training-free approach significantly reduces object hallucinations while maintaining caption quality with minimal inference latency impact.
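The suppression rule itself is small once the phase is known (an illustrative sketch; `suppress_low_attention`, the phase labels, and `tau` are assumptions, not the paper's API): during the focus phase, drop visual tokens that receive little attention, and leave the other phases untouched.

```python
def suppress_low_attention(attn_weights, phase, tau=0.05):
    """Return a keep-mask over visual tokens. Only in the 'focus' phase
    are tokens below the attention threshold tau suppressed; other
    phases pass everything through, so the model can still scan broadly."""
    if phase != "focus":
        return [True] * len(attn_weights)
    return [w >= tau for w in attn_weights]

# Token 1 barely attracts attention during focus -> suppressed.
print(suppress_low_attention([0.4, 0.01, 0.3], phase="focus"))  # [True, False, True]
print(suppress_low_attention([0.4, 0.01, 0.3], phase="scan"))   # [True, True, True]
```

Being phase-aware is the point: suppressing the same tokens during an early broad-attention phase would starve the model of context rather than remove distractors.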

AI · Bullish · arXiv – CS AI · Mar 17 · 6/10

Thinking in Uncertainty: Mitigating Hallucinations in MLRMs with Latent Entropy-Aware Decoding

Researchers propose Latent Entropy-Aware Decoding (LEAD), a new method to reduce hallucinations in multimodal large reasoning models by switching between continuous and discrete token embeddings based on entropy states. The technique addresses issues where transition words correlate with high-entropy states that lead to unreliable outputs in visual question answering tasks.
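The entropy-triggered switch can be sketched in a few lines (a toy stand-in for LEAD, with invented names; real systems would operate on the model's logits): compute the entropy of the next-token distribution and defer to a continuous (soft) embedding when the model is uncertain, committing to the discrete argmax token only when entropy is low.

```python
import math

def token_entropy(probs):
    """Shannon entropy (bits) of a next-token distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def choose_embedding_mode(probs, h_threshold=1.0):
    """High-entropy step: stay in the continuous embedding to defer
    commitment; low-entropy step: commit to the discrete token."""
    return "continuous" if token_entropy(probs) > h_threshold else "discrete"

print(choose_embedding_mode([0.25, 0.25, 0.25, 0.25]))   # continuous (H = 2.0 bits)
print(choose_embedding_mode([0.97, 0.01, 0.01, 0.01]))   # discrete (H ≈ 0.24 bits)
```

The summary's observation about transition words ("however", "therefore") is that they tend to land on exactly the high-entropy steps where committing to a discrete token early locks in an unreliable continuation.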

AI · Bullish · arXiv – CS AI · Mar 9 · 6/10

HiPP-Prune: Hierarchical Preference-Conditioned Structured Pruning for Vision-Language Models

Researchers introduce HiPP-Prune, a new framework for efficiently compressing vision-language models while maintaining performance and reducing hallucinations. The hierarchical approach uses preference-based pruning that considers multiple objectives including task utility, visual grounding, and compression efficiency.

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 4

SpotAgent: Grounding Visual Geo-localization in Large Vision-Language Models through Agentic Reasoning

Researchers introduce SpotAgent, a new framework that improves AI geo-localization by combining visual interpretation with external tool verification through agentic reasoning. The system addresses limitations of current Large Vision-Language Models that often make confident but ungrounded predictions when visual cues are sparse or ambiguous.