AIBullish · arXiv CS AI · 14h ago · 7/10
TARAC: Mitigating Hallucination in LVLMs via Temporal Attention Real-time Accumulative Connection
Researchers introduce TARAC, a training-free framework that mitigates hallucinations in Large Vision-Language Models by dynamically preserving visual attention across generation steps. The method achieves significant improvements, reducing hallucinated content by 25.2% and boosting perception scores by 10.65, while adding only ~4% computational overhead, making it practical for real-world deployment.
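The core idea of accumulating visual attention across decoding steps can be sketched roughly as follows. This is an illustrative toy, not the paper's implementation: the function name `accumulate_attention` and the decay factor `alpha` are assumptions, and real LVLM attention maps live inside the transformer's attention layers rather than as standalone arrays.

```python
import numpy as np

def accumulate_attention(step_attns, alpha=0.9):
    """Blend each generation step's image-token attention with a running
    accumulation, so visual focus from earlier steps is not lost as text
    generation progresses (toy sketch of the accumulative-connection idea).

    step_attns: list of 1-D arrays, attention over image tokens per step.
    alpha: decay on previously accumulated attention (illustrative value).
    Returns the adjusted attention distribution used at each step.
    """
    accumulated = np.zeros_like(step_attns[0])
    adjusted = []
    for attn in step_attns:
        # carry forward earlier visual attention, then mix in the current step
        accumulated = alpha * accumulated + attn
        # renormalize so the adjusted weights form a distribution again
        adjusted.append(accumulated / accumulated.sum())
    return adjusted

# Later steps often drift away from the image; accumulation pulls weight
# back toward regions the model attended to earlier.
steps = [np.array([0.6, 0.3, 0.1]), np.array([0.1, 0.2, 0.7])]
out = accumulate_attention(steps)
```

Because the method only reweights attention at inference time, it needs no retraining, which is consistent with the reported ~4% overhead.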