arXiv · CS AI · 1d ago
From Oracle to Noisy Context: Mitigating Contextual Exposure Bias in Speech-LLMs
Researchers developed a training framework to mitigate contextual exposure bias in Speech-LLMs: models trained on perfect (oracle) conversation history degrade when given the error-prone context of real-world use. The approach combines teacher error knowledge, context dropout, and direct preference optimization to improve robustness, reducing WER from 5.59% to 5.17% on TED-LIUM 3.
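Of the three ingredients the summary names, context dropout is the simplest to illustrate: during training, past utterances are randomly withheld so the model cannot rely on an always-perfect history. The sketch below is a minimal, hypothetical version of that idea; the function name, the `p_drop` rate, and the utterance-level granularity are assumptions for illustration, not the paper's exact recipe.

```python
import random


def context_dropout(history, p_drop=0.3, rng=None):
    """Randomly drop past utterances from a conversation history.

    Training on perturbed histories like this discourages over-reliance
    on an oracle context. Illustrative only: the paper's actual dropout
    scheme (rate, granularity) may differ.
    """
    rng = rng or random.Random()
    return [utt for utt in history if rng.random() >= p_drop]


history = ["utt1", "utt2", "utt3", "utt4"]
# A seeded RNG makes the perturbation reproducible for debugging.
kept = context_dropout(history, p_drop=0.5, rng=random.Random(0))
```

Each training step would then condition the model on `kept` instead of the full `history`, so the context distribution at training time better matches the noisy context seen at inference.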