AINeutral · arXiv – CS AI · 3h ago · 7/10
🧠
Social Bias in LLM-Generated Code: Benchmark and Mitigation
Researchers have identified severe social bias in code generated by large language models, with bias scores reaching 60.58% across four major models. The study also finds that standard fairness interventions often amplify rather than mitigate demographic discrimination in AI-generated software. To address this, the authors propose a Fairness Monitor Agent that reduces bias by 65.1% while improving code correctness.
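The summary does not describe how the Fairness Monitor Agent works internally, but monitors of this kind are commonly built as a generate/audit/regenerate loop around the code model. The sketch below is a minimal illustration under that assumption only: the function names (`audit_bias`, `monitored_generate`), the regex-based audit heuristic, and the retry budget are hypothetical stand-ins, not the paper's actual method.

```python
# Illustrative sketch of a fairness-monitor loop around an LLM code generator.
# All names and the audit heuristic here are assumptions, not the paper's design.
import re
from typing import Callable

# Crude illustrative audit: flag code that branches on demographic attributes.
SENSITIVE_ATTRS = ("gender", "race", "ethnicity", "religion", "age", "nationality")
_BRANCH_ON_ATTR = re.compile(
    r"\bif\b[^\n]*\b(" + "|".join(SENSITIVE_ATTRS) + r")\b", re.IGNORECASE
)

def audit_bias(code: str) -> list[str]:
    """Return human-readable findings for demographic branching in `code`."""
    return [
        f"branches on sensitive attribute: {m.group(1)!r}"
        for m in _BRANCH_ON_ATTR.finditer(code)
    ]

def monitored_generate(
    generate: Callable[[str], str],  # any prompt -> code callable, e.g. an LLM client
    prompt: str,
    max_rounds: int = 3,
) -> str:
    """Generate code, audit it, and re-prompt with findings until it passes."""
    code = generate(prompt)
    for _ in range(max_rounds):
        findings = audit_bias(code)
        if not findings:
            return code  # audit passed
        # Feed the findings back as structured feedback for a revision.
        feedback = (
            "Revise the code to remove demographic discrimination. Findings:\n- "
            + "\n- ".join(findings)
        )
        code = generate(f"{prompt}\n\n{feedback}\n\nPrevious attempt:\n{code}")
    return code  # best effort after exhausting the retry budget

if __name__ == "__main__":
    # Stub generator that produces biased code, then "fixes" it once given findings.
    def fake_llm(prompt: str) -> str:
        if "Findings" in prompt:
            return "def approve_loan(income, credit):\n    return credit > 650"
        return (
            "def approve_loan(income, credit, gender):\n"
            "    if gender == 'female':\n        return credit > 700\n"
            "    return credit > 650"
        )

    print(monitored_generate(fake_llm, "Write a loan approval function."))
```

In a real system the audit step would likely be an LLM judge or a learned classifier rather than a regex; the structural point is that findings are returned to the generator as actionable feedback instead of silently rejecting the output, which is consistent with the summary's claim that monitoring can reduce bias while also improving correctness.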