y0news

#gender-bias News & Analysis

5 articles tagged with #gender-bias. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Bearish · arXiv – CS AI · Mar 17 · 7/10

Widespread Gender and Pronoun Bias in Moral Judgments Across LLMs

A study of six major LLM families reveals systematic biases in moral judgments tied to gender pronouns and grammatical gender markers. The researchers found that the models consistently favor non-binary subjects while penalizing male subjects in fairness assessments, raising concerns about biases embedded in AI ethical decision-making.

๐Ÿข Meta๐Ÿง  Grok
AI · Neutral · arXiv – CS AI · Mar 5 · 7/10

A Systematic Analysis of Biases in Large Language Models

A study analyzed four major large language models (LLMs) across political, ideological, alliance, language, and gender dimensions, revealing persistent biases despite efforts to make the models neutral. The researchers uncovered these systematic biases using a range of experimental methods, including news summarization, stance classification, UN voting patterns, multilingual tasks, and survey responses.

AI · Neutral · arXiv – CS AI · Apr 13 · 6/10

Mitigating Extrinsic Gender Bias for Bangla Classification Tasks

Researchers have developed RandSymKL, a debiasing technique for Bangla language models that mitigates gender bias in classification tasks like sentiment analysis and hate speech detection. The study introduces four manually annotated benchmark datasets with gender-perturbation testing and demonstrates that the approach effectively reduces bias while maintaining competitive accuracy compared to existing methods.

AI · Neutral · arXiv – CS AI · Mar 16 · 6/10

Do LLMs have a Gender (Entropy) Bias?

Researchers found that large language models exhibit gender bias at the individual question level, producing responses with different amounts of information for men than for women even though they appear unbiased when aggregated by category. The study introduces a benchmark dataset called RealWorldQuestioning and shows that a simple prompt-based debiasing approach improves response quality in 78% of cases.

๐Ÿข Hugging Face๐Ÿง  ChatGPT
AI · Neutral · arXiv – CS AI · Mar 11 · 6/10

Gender Fairness in Audio Deepfake Detection: Performance and Disparity Analysis

Researchers analyzed gender bias in audio deepfake detection systems using fairness metrics that go beyond standard performance measures. The study found significant gender disparities in error distribution that conventional metrics such as Equal Error Rate fail to detect, highlighting the need for fairness-aware evaluation of AI voice authentication systems.