AIBearish · arXiv CS AI · 9h ago · 7/10
Large Language Models Reproduce Racial Stereotypes When Used for Text Annotation
A study of 19 large language models, spanning over 4 million annotation judgments, finds systematic racial bias in automated text annotation: the models consistently reproduce harmful stereotypes cued by names and dialect. Texts containing Black-associated names are rated as more aggressive, and texts written in African American Vernacular English are rated as less professional and more toxic.
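The audit design described above can be sketched as a name-perturbation test: hold the text constant, swap in names associated with different groups, and compare the annotator's scores. This is a minimal, hypothetical sketch, not the paper's actual pipeline; the `annotate` function is a placeholder standing in for an LLM call, and the template and name lists are illustrative, not the study's stimuli.

```python
# Minimal sketch of a name-perturbation bias audit for an LLM annotator.
# All names, templates, and the annotate() stub are hypothetical; a real
# audit would query each model under test and parse its rating.

TEMPLATE = "{name} said the meeting ran long and people should leave on time."

# Illustrative placeholder name lists, not the study's stimuli.
NAMES = {
    "group_a": ["Emily", "Greg"],
    "group_b": ["Lakisha", "Jamal"],
}

def annotate(text: str) -> float:
    """Placeholder 'aggressiveness' scorer (0.0-1.0) standing in for an
    LLM judgment; returns a constant so the harness runs offline."""
    return 0.5

def audit(names_by_group: dict[str, list[str]]) -> dict[str, float]:
    """Average the annotator's score per group over the same template.

    With an unbiased annotator the per-group means should match; a
    persistent gap between groups signals name-cued bias."""
    return {
        group: sum(annotate(TEMPLATE.format(name=n)) for n in names) / len(names)
        for group, names in names_by_group.items()
    }

if __name__ == "__main__":
    print(audit(NAMES))
```

In a real audit the constant scorer would be replaced by calls to each of the models under test, and significance of the per-group gap would be checked across many templates.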