Mitigating Translationese Bias in Multilingual LLM-as-a-Judge via Disentangled Information Bottleneck
arXiv – CS AI | Hongbin Zhang, Kehai Chen, Xuefen Bai, Youcheng Pan, Yang Xiang, Jinpeng Wang, Min Zhang
🤖 AI Summary
Researchers introduce DIBJudge, a framework that addresses a systematic bias in multilingual LLM-as-a-judge evaluation: large language models tend to favor machine-translated text over human-authored content. The approach uses variational information compression to isolate bias factors from judgment-critical signal, improving LLM judgment accuracy across languages.
Key Takeaways
- Large language models exhibit translationese bias, systematically favoring machine-translated text over human-authored references in multilingual evaluations.
- The bias is particularly pronounced in low-resource languages and stems from spurious correlations with English alignment and cross-lingual predictability.
- The DIBJudge framework uses variational information compression to learn judgment-critical representations while isolating bias factors.
- The approach incorporates a cross-covariance penalty to suppress statistical dependence between the robust and bias representations.
- Extensive evaluations show DIBJudge consistently outperforms existing baselines and substantially reduces translationese bias.
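To make the cross-covariance idea concrete, here is a minimal illustrative sketch (not the paper's implementation; the function name, dimensions, and numpy formulation are assumptions) of a penalty that measures linear statistical dependence between a batch of "robust" and "bias" representations. Driving this penalty toward zero during training decorrelates the two representation spaces:

```python
import numpy as np

def cross_covariance_penalty(z_robust: np.ndarray, z_bias: np.ndarray) -> float:
    """Squared Frobenius norm of the cross-covariance between two
    batches of representations, shape (batch, dim). A small value means
    the two representation spaces carry little shared linear information."""
    zr = z_robust - z_robust.mean(axis=0, keepdims=True)  # center each feature
    zb = z_bias - z_bias.mean(axis=0, keepdims=True)
    n = z_robust.shape[0]
    cov = zr.T @ zb / (n - 1)          # (dim_r, dim_b) cross-covariance matrix
    return float(np.sum(cov ** 2))     # squared Frobenius norm as the penalty

rng = np.random.default_rng(0)
a = rng.normal(size=(512, 8))
b = rng.normal(size=(512, 8))

# Independent representations -> penalty near zero (sampling noise only)
print(cross_covariance_penalty(a, b))
# Statistically dependent representations -> much larger penalty
print(cross_covariance_penalty(a, a + 0.1 * b))
```

In a training loop this term would be added, with a weighting coefficient, to the main judgment loss so the model is rewarded for keeping bias-related features out of the representation used for evaluation.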
#llm #multilingual #bias-mitigation #machine-translation #ai-evaluation #research #language-models #information-bottleneck