arXiv – CS AI · 10h ago
Mitigating Extrinsic Gender Bias for Bangla Classification Tasks
Researchers have developed RandSymKL, a debiasing technique for Bangla language models that mitigates gender bias in classification tasks such as sentiment analysis and hate speech detection. The study introduces four manually annotated benchmark datasets with gender-perturbation testing and shows that the approach reduces bias while maintaining accuracy competitive with existing methods.
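The gender-perturbation testing mentioned above can be sketched as follows. This is an illustrative example, not the paper's code: the gendered-term pairs below are hypothetical English stand-ins (the paper works with Bangla), and `classify` stands for any text-classification function.

```python
# Hypothetical gendered-term pairs; the actual study uses Bangla terms.
SWAP = {"he": "she", "she": "he", "his": "her", "her": "his",
        "man": "woman", "woman": "man"}

def gender_perturb(text: str) -> str:
    """Swap each gendered token for its counterpart, leaving other tokens intact."""
    return " ".join(SWAP.get(t.lower(), t) for t in text.split())

def bias_flip_rate(classify, sentences) -> float:
    """Fraction of examples whose predicted label changes under gender perturbation.

    `classify` is any text -> label function. A lower flip rate indicates
    less extrinsic gender bias: predictions should not depend on gendered terms.
    """
    flips = sum(classify(s) != classify(gender_perturb(s)) for s in sentences)
    return flips / len(sentences)
```

A debiasing method is then judged by whether it lowers the flip rate on the perturbed benchmark while keeping ordinary classification accuracy close to the unmodified model.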