
#fairness News & Analysis

22 articles tagged with #fairness. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Bearish · arXiv – CS AI · Mar 17 · 7/10

Widespread Gender and Pronoun Bias in Moral Judgments Across LLMs

A comprehensive study of six major LLM families reveals systematic biases in moral judgments based on gender pronouns and grammatical markers. The research found that AI models consistently favor non-binary subjects while penalizing male subjects in fairness assessments, raising concerns about embedded biases in AI ethical decision-making.

๐Ÿข Meta๐Ÿง  Grok
AI · Bullish · arXiv – CS AI · Mar 17 · 7/10

Justitia: Fair and Efficient Scheduling of Task-parallel LLM Agents with Selective Pampering

Justitia is a new scheduling system for task-parallel LLM agents that optimizes GPU server performance through selective resource allocation based on completion order prediction. The system uses memory-centric cost quantification and virtual-time fair queuing to achieve both efficiency and fairness in LLM serving environments.

๐Ÿข Meta
AI · Neutral · arXiv – CS AI · Mar 12 · 7/10

Assessing Cognitive Biases in LLMs for Judicial Decision Support: Virtuous Victim and Halo Effects

Research examining five major LLMs found they exhibit human-like cognitive biases when evaluating judicial scenarios, showing stronger virtuous victim effects but reduced credential-based halo effects compared to humans. The study suggests LLMs may offer modest improvements over human decision-making in judicial contexts, though variability across models limits current practical application.

🧠 ChatGPT · 🧠 Claude · 🧠 Sonnet
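
Effects like these are typically measured with paired vignettes that differ only in a legally irrelevant attribute of the victim. A hedged sketch, with a hypothetical `query_llm` client and illustrative case text:

```python
# Virtuous-victim probe: identical case facts, with only the victim's
# (legally irrelevant) moral character varied. `query_llm` is a
# hypothetical stand-in for the model under test.

CASE = ("The defendant's car struck {victim} crossing the street, "
        "causing a broken arm. Recommend damages from $0 to $100,000. "
        "Answer with only a number.")

VICTIMS = {
    "virtuous": "a volunteer firefighter",
    "neutral":  "an accountant",
}

def query_llm(prompt: str) -> float:
    raise NotImplementedError("plug in the model client here")

def virtuous_victim_effect(n: int = 30) -> float:
    means = {k: sum(query_llm(CASE.format(victim=v)) for _ in range(n)) / n
             for k, v in VICTIMS.items()}
    # Positive gap: the same injury is compensated more when the victim
    # is described as morally virtuous -- a legally irrelevant factor.
    return means["virtuous"] - means["neutral"]
```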
AI · Neutral · arXiv – CS AI · Mar 5 · 7/10

A Systematic Analysis of Biases in Large Language Models

A comprehensive study analyzed four major large language models (LLMs) across political, ideological, alliance, language, and gender dimensions, revealing persistent biases despite efforts to make them neutral. The research used various experimental methods including news summarization, stance classification, UN voting patterns, multilingual tasks, and survey responses to uncover these systematic biases.

AI · Neutral · arXiv – CS AI · 3d ago · 6/10

Mitigating Extrinsic Gender Bias for Bangla Classification Tasks

Researchers have developed RandSymKL, a debiasing technique for Bangla language models that mitigates gender bias in classification tasks like sentiment analysis and hate speech detection. The study introduces four manually annotated benchmark datasets with gender-perturbation testing and demonstrates that the approach effectively reduces bias while maintaining competitive accuracy compared to existing methods.
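
The summary gives only the method's name, so the formulation below is an educated guess at what a symmetric-KL consistency regularizer over gender-perturbed pairs looks like; the swap generation and the loss weight `lam` are assumptions, not the paper's values.

```python
import torch
import torch.nn.functional as F

# Hedged sketch of a symmetric-KL consistency term in the spirit of the
# RandSymKL name: the classifier should give matching class distributions
# on an input and its gender-perturbed counterpart.

def sym_kl(p_logits: torch.Tensor, q_logits: torch.Tensor) -> torch.Tensor:
    """Symmetric KL divergence between two predicted class distributions."""
    log_p = F.log_softmax(p_logits, dim=-1)
    log_q = F.log_softmax(q_logits, dim=-1)
    p, q = log_p.exp(), log_q.exp()
    kl_pq = (p * (log_p - log_q)).sum(-1)
    kl_qp = (q * (log_q - log_p)).sum(-1)
    return (kl_pq + kl_qp).mean()

def training_loss(model, x, x_swapped, y, lam: float = 1.0):
    logits = model(x)
    logits_swapped = model(x_swapped)        # gender-perturbed batch
    task = F.cross_entropy(logits, y)        # preserve task accuracy
    fair = sym_kl(logits, logits_swapped)    # penalize gendered shifts
    return task + lam * fair
```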

AI · Bullish · arXiv – CS AI · Apr 7 · 6/10

APPA: Adaptive Preference Pluralistic Alignment for Fair Federated RLHF of LLMs

Researchers propose APPA, a new framework for aligning large language models with diverse human preferences in federated learning environments. The method dynamically reweights group-level rewards to improve fairness, achieving up to 28% better alignment for underperforming groups while maintaining overall model performance.

๐Ÿข Meta๐Ÿง  Llama
AI · Bearish · arXiv – CS AI · Mar 26 · 6/10

Who Benefits from RAG? The Role of Exposure, Utility and Attribution Bias

Research reveals that Retrieval-Augmented Generation (RAG) systems exhibit fairness issues, with queries from certain demographic groups systematically receiving higher accuracy than others. The study identifies three key factors affecting fairness: group exposure in retrieved documents, utility of group-specific documents, and attribution bias in how generators use different group documents.

๐Ÿข Meta
AI · Neutral · arXiv – CS AI · Mar 17 · 6/10

MESD: Detecting and Mitigating Procedural Bias in Intersectional Groups

Researchers propose MESD (Multi-category Explanation Stability Disparity), a new metric to detect procedural bias in AI models across intersectional groups. They also introduce the UEF framework, which balances utility, explanation quality, and fairness in machine learning systems.
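
MESD's exact definition is not spelled out in the summary; the sketch below illustrates the underlying idea of explanation-stability disparity: compare how stable feature attributions are under small input perturbations, group by group. The `explain_fn` interface and noise scale are assumptions.

```python
import numpy as np

# Explanation-stability disparity sketch: if attributions are much less
# stable under perturbation for one intersectional group, the *procedure*
# treats that group differently even if accuracy is equal.

def stability(explain_fn, X, noise=0.01, trials=10, rng=None) -> float:
    """Mean cosine similarity between attributions on X and noisy X."""
    rng = rng or np.random.default_rng(0)
    base = explain_fn(X)                       # (n_samples, n_features)
    sims = []
    for _ in range(trials):
        pert = explain_fn(X + rng.normal(0, noise, X.shape))
        num = (base * pert).sum(-1)
        den = np.linalg.norm(base, axis=-1) * np.linalg.norm(pert, axis=-1)
        sims.append(num / (den + 1e-12))
    return float(np.mean(sims))

def stability_disparity(explain_fn, X, groups):
    """Gap between the most and least stable intersectional group."""
    scores = {g: stability(explain_fn, X[groups == g])
              for g in np.unique(groups)}
    return max(scores.values()) - min(scores.values()), scores
```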

AI · Neutral · arXiv – CS AI · Mar 17 · 6/10

Evaluation of Audio Language Models for Fairness, Safety, and Security

Researchers introduce a structural taxonomy and unified evaluation framework for Audio Large Language Models (ALLMs) to assess fairness, safety, and security (FSS). The study reveals systematic differences in how ALLMs handle audio versus text inputs, with FSS behavior closely tied to how acoustic information is integrated.

AI · Bullish · arXiv – CS AI · Mar 17 · 6/10

Ethical Fairness without Demographics in Human-Centered AI

Researchers introduce Flare, a new AI fairness framework that ensures ethical outcomes without requiring demographic data, addressing privacy and regulatory concerns in human-centered AI applications. The system uses Fisher Information to detect hidden biases and includes a novel evaluation metric suite called BHE for measuring ethical fairness beyond traditional statistical measures.

๐Ÿข Meta
AI · Neutral · arXiv – CS AI · Mar 16 · 6/10

Causality Is Key to Understand and Balance Multiple Goals in Trustworthy ML and Foundation Models

Researchers propose integrating causal methods into machine learning systems to balance competing objectives like fairness, privacy, robustness, accuracy, and explainability. The paper argues that addressing these principles in isolation leads to conflicts and suboptimal solutions, while causal approaches can help navigate trade-offs in both trustworthy ML and foundation models.

AI · Neutral · arXiv – CS AI · Mar 2 · 6/10

BRIDGE the Gap: Mitigating Bias Amplification in Automated Scoring of English Language Learners via Inter-group Data Augmentation

Researchers developed BRIDGE, a framework to reduce bias in AI-powered automated scoring systems that unfairly penalize English Language Learners (ELLs). The system addresses representation bias by generating synthetic high-scoring ELL samples, achieving fairness improvements comparable to using additional human data while maintaining overall performance.
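
The core move the summary describes is filling sparse (group, score-band) cells with synthetic samples. A minimal sketch follows, where `paraphrase` is a hypothetical generator standing in for the paper's more involved synthesis:

```python
import random
from collections import Counter

# Representation-bias fix by cell balancing: the pool has few
# high-scoring ELL essays, so synthesize more until every
# (group, score-band) cell matches the largest one.

def paraphrase(text: str) -> str:
    raise NotImplementedError("plug in an LLM or rule-based rewriter")

def balance_cells(samples, key=lambda s: (s["group"], s["score_band"])):
    counts = Counter(key(s) for s in samples)
    target = max(counts.values())
    out = list(samples)
    for cell, n in counts.items():
        pool = [s for s in samples if key(s) == cell]
        for _ in range(target - n):               # fill the sparse cell
            src = random.choice(pool)
            out.append({**src, "text": paraphrase(src["text"]),
                        "synthetic": True})
    return out
```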

AI · Neutral · arXiv – CS AI · Mar 2 · 6/10

When Does Multimodal Learning Help in Healthcare? A Benchmark on EHR and Chest X-Ray Fusion

Researchers conducted a systematic benchmark study of multimodal fusion between Electronic Health Records (EHR) and chest X-rays for clinical decision support, revealing when and how combining data modalities improves healthcare AI performance. The study found that multimodal fusion helps when data is complete but that its benefits degrade under realistic missing-data scenarios; the authors also released an open-source benchmarking toolkit for reproducible evaluation.
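
A late-fusion baseline of the kind such benchmarks typically compare looks like the sketch below: one encoder per modality, concatenated before the prediction head. Dimensions and architecture are placeholders, not the paper's setup.

```python
import torch
import torch.nn as nn

# Illustrative late-fusion baseline: a GRU over EHR time series and an
# MLP over precomputed chest X-ray features, fused by concatenation.

class LateFusion(nn.Module):
    def __init__(self, ehr_dim=76, img_dim=512, hidden=128, n_classes=2):
        super().__init__()
        self.ehr_enc = nn.GRU(ehr_dim, hidden, batch_first=True)
        self.img_enc = nn.Sequential(nn.Linear(img_dim, hidden), nn.ReLU())
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, ehr_seq, img_feat):
        _, h = self.ehr_enc(ehr_seq)               # final hidden state
        z = torch.cat([h[-1], self.img_enc(img_feat)], dim=-1)
        return self.head(z)

model = LateFusion()
logits = model(torch.randn(4, 48, 76), torch.randn(4, 512))  # 48h of EHR
print(logits.shape)  # torch.Size([4, 2])
```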

AI · Neutral · arXiv – CS AI · Mar 2 · 7/10

Biases in the Blind Spot: Detecting What LLMs Fail to Mention

Researchers have developed an automated pipeline to detect hidden biases in Large Language Models that do not surface in the models' reasoning explanations. The system discovered previously unknown biases tied to attributes such as Spanish fluency and writing formality across seven LLMs in hiring, loan approval, and university admission tasks.
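
The pipeline's core test can be phrased simply: an attribute is a blind-spot bias if flipping it changes decisions while the model's explanations rarely mention it. A hedged sketch, with `query_llm` as a hypothetical stand-in returning a decision and its explanation:

```python
import re

# Blind-spot probe: flip one attribute of a profile, measure how often
# the decision changes versus how often explanations mention the
# attribute. High flip rate + low mention rate = silent bias.

def query_llm(profile: dict) -> tuple[str, str]:
    raise NotImplementedError("plug in the model client here")

def silent_bias_score(profiles, attr: str, alt_value) -> dict:
    flips, mentions = 0, 0
    for p in profiles:
        d0, _ = query_llm(p)
        d1, expl = query_llm({**p, attr: alt_value})
        flips += d0 != d1
        mentions += bool(re.search(attr.replace("_", " "), expl, re.I))
    n = len(profiles)
    return {"decision_flip_rate": flips / n, "mention_rate": mentions / n}
```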

AI · Neutral · arXiv – CS AI · Mar 5 · 4/10

Fairness Begins with State: Purifying Latent Preferences for Hierarchical Reinforcement Learning in Interactive Recommendation

Researchers propose DSRM-HRL, a new framework that uses diffusion models to purify user preference data and hierarchical reinforcement learning to balance recommendation accuracy with fairness. The system addresses bias in interactive recommendation systems by separating state estimation from decision-making, achieving better outcomes on both utility and exposure equity.

AI · Neutral · arXiv – CS AI · Mar 4 · 4/10

Revealing Positive and Negative Role Models to Help People Make Good Decisions

Researchers present a framework for social planners to strategically reveal positive and negative role models to influence agent behavior in social networks. The study addresses optimization challenges when disclosure budgets are limited and proposes algorithms to maximize social welfare while maintaining fairness across different groups.
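
With a limited disclosure budget, the natural baseline is greedy selection by marginal welfare gain, sketched below; `welfare` is a hypothetical evaluator of a revealed set, and the paper's algorithms (which also enforce fairness across groups) are more refined.

```python
# Greedy budgeted-disclosure sketch: reveal at most `budget` role models,
# each time picking the candidate with the best marginal welfare gain.

def greedy_disclosure(candidates: set, budget: int, welfare):
    """welfare(revealed_set) -> float; assumed expensive but queryable."""
    revealed = set()
    for _ in range(budget):
        best, best_gain = None, 0.0
        for c in candidates - revealed:
            gain = welfare(revealed | {c}) - welfare(revealed)
            if gain > best_gain:
                best, best_gain = c, gain
        if best is None:          # no positive marginal gain remains
            break
        revealed.add(best)
    return revealed
```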

AI · Neutral · arXiv – CS AI · Mar 4 · 4/10

Proactive Guiding Strategy for Item-side Fairness in Interactive Recommendation

Researchers propose HRL4PFG, a new interactive recommendation framework using hierarchical reinforcement learning to promote fairness by guiding user preferences toward long-tail items. The approach aims to balance item-side fairness with user satisfaction, showing improved performance in cumulative interaction rewards and user engagement length compared to existing methods.

AI · Neutral · arXiv – CS AI · Mar 2 · 4/10

Fairness-in-the-Workflow: How Machine Learning Practitioners at Big Tech Companies Approach Fairness in Recommender Systems

Researchers conducted interviews with 11 practitioners at major tech companies to study how fairness considerations are integrated into recommender system workflows. The study identified key challenges including defining fairness in RS contexts, balancing stakeholder interests, and facilitating cross-team communication between technical, legal, and fairness teams.