arXiv – CS AI · 10h ago
Probing Cross-modal Information Hubs in Audio-Visual LLMs
Researchers analyzed how audio-visual large language models (AVLLMs) process cross-modal information and found that integrated audio-visual information concentrates in a small set of specialized 'sink tokens' rather than spreading uniformly across the sequence. This finding enables a training-free method that reduces hallucinations by leveraging these cross-modal information hubs.
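The paper's method isn't detailed here, but the core idea of attention "sink" tokens can be illustrated with a minimal, hypothetical sketch: given an attention map, flag tokens that receive a disproportionate share of attention mass relative to the uniform baseline. All names and thresholds below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Hypothetical sketch: find "sink token" candidates in one attention head.
# attn[i, j] = attention weight from query token i to key token j (rows sum to 1).
rng = np.random.default_rng(0)
n_tokens = 8
attn = rng.random((n_tokens, n_tokens))
attn[:, 0] += 5.0  # simulate a sink: token 0 draws outsized attention
attn /= attn.sum(axis=1, keepdims=True)

# Attention mass received by each key token, averaged over all queries.
received = attn.mean(axis=0)

# Flag tokens receiving well above the uniform share 1/n (factor 3 is arbitrary).
threshold = 3.0 / n_tokens
sinks = np.where(received > threshold)[0]
print(sinks)  # → [0]
```

In a real AVLLM one would inspect attention maps per layer and head, restricted to positions where audio and visual streams interact, to locate the cross-modal hubs the paper describes.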