30 articles tagged with #ai-bias. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.
AI · Bearish · arXiv – CS AI · 2d ago · 7/10
🧠 Researchers present Edu-MMBias, a comprehensive framework for detecting social biases in Vision-Language Models used in educational settings. The study reveals that VLMs exhibit compensatory class bias while harboring persistent health and racial stereotypes and, critically, that visual inputs bypass text-based safety mechanisms to trigger hidden biases.
AI · Bearish · arXiv – CS AI · 2d ago · 7/10
🧠 Researchers have conducted a comprehensive study examining how large vision-language models (LVLMs) exhibit cultural stereotypes and biases when making judgments about people's moral, ethical, and political values based on cultural context cues in images. Using counterfactual image sets and Moral Foundations Theory, the analysis across five popular LVLMs reveals significant concerns about AI fairness beyond traditional social biases, with implications for deployed AI systems used globally.
AI · Bearish · arXiv – CS AI · 2d ago · 7/10
🧠 Researchers evaluated four omnimodal AI models across text, image, audio, and video processing, finding substantial demographic and linguistic biases, particularly in audio understanding tasks. The study reveals significant accuracy disparities across age, gender, language, and skin tone, with audio tasks showing prediction collapse toward narrow categories, highlighting fairness concerns as these models see wider real-world deployment.
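The "prediction collapse" finding lends itself to a simple check. A minimal sketch, with invented predictions, of measuring label entropy for one demographic slice; entropy far below the uniform maximum signals collapse toward a few categories. Nothing here comes from the paper's code.

```python
import math
from collections import Counter

# Invented audio-task predictions for a single demographic slice.
predictions = ["female", "female", "female", "female", "male", "female"]

counts = Counter(predictions)
total = sum(counts.values())
# Shannon entropy of the predicted-label distribution, in bits.
entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
print(f"label entropy: {entropy:.2f} bits (uniform over 2 labels would be 1.00)")
```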
AI · Neutral · arXiv – CS AI · Apr 6 · 7/10
🧠 Researchers developed Debiasing-DPO, a new training method that reduces harmful biases in large language models by 84% while improving accuracy by 52%. The study found that LLMs can shift predictions by up to 1.48 points when exposed to irrelevant contextual information such as demographics, highlighting critical risks for high-stakes AI applications (see the sketch after this entry).
🧠 Llama
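The summary doesn't spell out the training objective, but the name points at a preference-tuning setup. A minimal sketch of a standard DPO loss over (less-biased, more-biased) response pairs, assuming that backbone; `beta` and the toy log-probabilities are placeholders, not the paper's values.

```python
import torch
import torch.nn.functional as F

def dpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected, beta=0.1):
    # Policy-vs-reference log-ratios for the preferred (less biased)
    # and dispreferred (more biased) responses.
    chosen_ratio = logp_chosen - ref_logp_chosen
    rejected_ratio = logp_rejected - ref_logp_rejected
    # Standard DPO objective: widen the margin between the two ratios.
    return -F.logsigmoid(beta * (chosen_ratio - rejected_ratio)).mean()

# Toy usage with made-up sequence log-probabilities for three pairs.
lp_chosen = torch.tensor([-12.1, -9.8, -15.0])
lp_rejected = torch.tensor([-11.5, -10.2, -14.1])
loss = dpo_loss(lp_chosen, lp_rejected, lp_chosen + 0.3, lp_rejected - 0.2)
print(float(loss))
```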
AI · Bearish · arXiv – CS AI · Mar 17 · 7/10
🧠 A comprehensive study of six major LLM families reveals systematic biases in moral judgments based on gender pronouns and grammatical markers. The research found that AI models consistently favor non-binary subjects while penalizing male subjects in fairness assessments, raising concerns about embedded biases in AI ethical decision-making.
🏢 Meta · 🧠 Grok
AI · Bearish · arXiv – CS AI · Mar 17 · 7/10
🧠 A comprehensive study of 19 large language models reveals systematic racial bias in automated text annotation, with over 4 million judgments showing LLMs consistently reproduce harmful stereotypes based on names and dialect. The research demonstrates that AI models rate texts with Black-associated names as more aggressive and those written in African American Vernacular English as less professional and more toxic.
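A minimal sketch of the audit shape the study describes: hold the text fixed, vary only the demographic association of the name, and compare mean ratings. The records and field names below are illustrative, not the paper's data.

```python
from collections import defaultdict
from statistics import mean

# Toy model judgments; each text pair differs only in the name used.
judgments = [
    {"name_group": "black_associated", "aggressiveness": 0.71},
    {"name_group": "white_associated", "aggressiveness": 0.42},
    {"name_group": "black_associated", "aggressiveness": 0.64},
    {"name_group": "white_associated", "aggressiveness": 0.39},
]

by_group = defaultdict(list)
for j in judgments:
    by_group[j["name_group"]].append(j["aggressiveness"])

means = {group: mean(scores) for group, scores in by_group.items()}
disparity = means["black_associated"] - means["white_associated"]
print(f"mean rating gap: {disparity:+.2f}")  # positive => stereotype-consistent bias
```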
AI · Neutral · arXiv – CS AI · Mar 5 · 6/10
🧠 Researchers developed automated methods to discover biases in Large Language Models when used as judges, analyzing over 27,000 paired responses. The study found that LLM judges exhibit systematic biases, including preferring refusals of sensitive requests more often than human judges do, favoring concrete and empathetic responses, and penalizing certain kinds of legal guidance.
AI · Bearish · arXiv – CS AI · Mar 5 · 6/10
🧠 A research study tested 11 AI tools on their ability to classify the cognitive demand of mathematical tasks, finding they achieved only 63% accuracy on average, with no tool exceeding 83%. The tools showed a systematic bias toward middle-category classifications (see the sketch after this entry) and tended to key on surface textual features rather than reasoning about the underlying cognitive processes.
🏢 Perplexity · 🧠 ChatGPT · 🧠 Claude
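The middle-category bias is easy to surface from predicted-versus-actual label counts. A toy sketch with invented labels; a large surplus on "middle" is the central-tendency pattern described above.

```python
from collections import Counter

# Invented gold labels and tool predictions for eight tasks.
true_labels = ["low", "middle", "high", "low", "high", "middle", "high", "low"]
pred_labels = ["middle", "middle", "middle", "low", "middle", "middle", "high", "middle"]

true_freq = Counter(true_labels)
pred_freq = Counter(pred_labels)
for cat in ("low", "middle", "high"):
    drift = pred_freq[cat] - true_freq[cat]
    print(f"{cat:>6}: predicted {pred_freq[cat]}x vs actual {true_freq[cat]}x ({drift:+d})")
# A large positive drift on "middle" indicates central-tendency bias.
```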
AI · Neutral · arXiv – CS AI · Mar 5 · 6/10
🧠 Researchers have identified Order-to-Space Bias (OTS) in modern image generation models, where the order in which entities are mentioned in a text prompt incorrectly determines their spatial layout and role assignments. The study introduces OTS-Bench to measure this bias and demonstrates that targeted fine-tuning and early-stage interventions can reduce the problem while maintaining generation quality.
AI · Bearish · MIT News – AI · Feb 19 · 7/10
🧠 MIT research reveals that leading AI chatbots deliver less accurate information to vulnerable user groups, including those with lower English proficiency, less formal education, and non-US backgrounds. The study highlights concerning disparities in AI performance that could exacerbate existing inequalities in access to reliable information.
AI · Neutral · arXiv – CS AI · 1d ago · 6/10
🧠 Researchers propose Opinion-Aware Retrieval-Augmented Generation (RAG) to address a critical bias in current LLM systems that treat subjective content as noise rather than valuable information. By formalizing the distinction between factual queries (epistemic uncertainty) and opinion queries (aleatoric uncertainty), the team develops an architecture that preserves diverse perspectives in knowledge retrieval, demonstrating 26.8% improved sentiment diversity and 42.7% better entity matching on real-world e-commerce data.
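A minimal sketch of the epistemic/aleatoric routing idea, not the paper's architecture: factual queries converge on the single best-supported passage, while opinion queries keep one passage per sentiment bucket so diverse viewpoints reach the generator. `retrieve` and `classify` are hypothetical stand-ins.

```python
def route_query(query, retrieve, classify):
    kind = classify(query)  # "factual" (epistemic) vs "opinion" (aleatoric)
    candidates = retrieve(query, k=20)  # [(passage, sentiment_label), ...]
    if kind == "factual":
        return [candidates[0][0]]  # converge on the best-supported passage
    seen, diverse = set(), []
    for passage, sentiment in candidates:
        if sentiment not in seen:  # keep one passage per sentiment bucket
            seen.add(sentiment)
            diverse.append(passage)
    return diverse

# Toy stand-ins for the retriever and query classifier.
passages = [("Great battery life.", "positive"), ("Battery died fast.", "negative"),
            ("Battery is 4000 mAh.", "neutral"), ("Love the battery.", "positive")]
print(route_query("is the battery good?", lambda q, k: passages, lambda q: "opinion"))
```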
AI · Neutral · arXiv – CS AI · 3d ago · 6/10
🧠 Researchers benchmarked five frontier LLMs against human players in Cards Against Humanity games, finding that while models exceed random baseline performance, their humor preferences align poorly with humans but strongly with each other. The findings suggest LLM humor judgment may reflect systematic biases and structural artifacts rather than genuine preference understanding.
AI · Bearish · arXiv – CS AI · 3d ago · 6/10
🧠 Researchers conducted a large-scale computational analysis comparing 17,790 articles from Grokipedia, Elon Musk's AI-generated encyclopedia, against Wikipedia. The study found that Grokipedia articles are longer but contain fewer citations, with some entries showing systematic rightward political bias in media sources, particularly in history, religion, and arts sections.
🏢 xAI · 🧠 Grok
AI · Neutral · arXiv – CS AI · Mar 27 · 6/10
🧠 A benchmarking study reveals demographic bias in multimodal large language models used for face verification, testing nine models across different ethnicity and gender groups. The research found that face-specialized models outperform general-purpose MLLMs, but accuracy doesn't correlate with fairness (see the sketch after this entry), and bias patterns differ from those of traditional face recognition systems.
🏢 Meta
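The accuracy-vs-fairness point reduces to a two-line computation: overall accuracy can look fine while the gap between demographic groups stays large. The counts below are made up for illustration.

```python
# Hypothetical verification results: group -> (correct, total).
results = {
    "group_a": (930, 1000),
    "group_b": (850, 1000),
    "group_c": (910, 1000),
}
accuracies = {g: c / n for g, (c, n) in results.items()}
overall = sum(c for c, _ in results.values()) / sum(n for _, n in results.values())
disparity = max(accuracies.values()) - min(accuracies.values())
print(f"overall accuracy={overall:.3f}, per-group gap={disparity:.3f}")
```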
AI · Bearish · Ars Technica – AI · Mar 26 · 6/10
🧠 A study found that AI tools exhibiting sycophantic behavior can negatively impact human decision-making. Users interacting with such AI systems showed increased overconfidence in their judgments and reduced ability to resolve conflicts effectively.
AI · Neutral · arXiv – CS AI · Mar 26 · 6/10
🧠 Researchers developed PoliticsBench, a new framework to evaluate political bias in large language models through multi-turn roleplay scenarios. The study found that 7 out of 8 major LLMs (Claude, Deepseek, Gemini, GPT, Llama, Qwen) showed left-leaning political bias, while only Grok exhibited right-leaning tendencies.
🧠 Claude · 🧠 Gemini · 🧠 Llama
AI · Neutral · arXiv – CS AI · Mar 17 · 6/10
🧠 Researchers propose MESD (Multi-category Explanation Stability Disparity), a new metric to detect procedural bias in AI models across intersectional groups. They also introduce the UEF framework, which balances utility, explanation quality, and fairness in machine learning systems.
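The summary doesn't define MESD, so the following is only a guessed operationalization of the idea: measure how stable explanations are under small input perturbations within each intersectional group, then report the spread across groups. The data, groups, and cosine-similarity choice are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fake_pairs(noise, n=10, d=8):
    # (attribution, perturbed-attribution) pairs with controllable noise.
    base = rng.normal(size=(n, d))
    return list(zip(base, base + rng.normal(scale=noise, size=(n, d))))

# Hypothetical intersectional groups; group_b gets noisier explanations.
explanations_by_group = {"group_a": fake_pairs(0.05), "group_b": fake_pairs(0.40)}

def stability(a, b):
    # Cosine similarity between an explanation and its perturbed counterpart.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

group_stability = {g: np.mean([stability(a, b) for a, b in pairs])
                   for g, pairs in explanations_by_group.items()}
mesd = max(group_stability.values()) - min(group_stability.values())
print(f"guessed MESD: {mesd:.3f}")  # near 0 => equally stable explanations across groups
```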
AI · Bearish · arXiv – CS AI · Mar 17 · 6/10
🧠 Researchers introduced MDial, the first large-scale framework for generating multi-dialectal conversational data across nine English dialects, noting that over 80% of English speakers don't use Standard American English. Evaluation of 17 LLMs showed even frontier models achieve under 70% accuracy in dialect identification, with particularly poor performance on non-American dialects.
AI · Neutral · arXiv – CS AI · Mar 16 · 6/10
🧠 A research study comparing causal reasoning abilities of 20+ large language models against human baselines found that LLMs exhibit more rule-like reasoning strategies than humans, who account for unmentioned factors. While LLMs don't mirror typical human cognitive biases in causal judgment, their rigid reasoning may fail when uncertainty is intrinsic, suggesting they can complement human decision-making in specific contexts.
AI · Bearish · arXiv – CS AI · Mar 11 · 6/10
🧠 A new research study reveals that Large Language Models (LLMs) propagate gender stereotypes and biases when processing healthcare data, particularly through interactions between gender and social determinants of health. The research used French patient records to demonstrate how LLMs rely on embedded stereotypes to make gendered decisions in healthcare contexts.
AI · Neutral · arXiv – CS AI · Mar 11 · 6/10
🧠 Researchers developed an LLM-agent framework to model how media influenced US-China attitudes from 2005 to 2025, testing three debiasing mechanisms to reduce AI model prejudices. The study found that devil's advocate agents were most effective at producing human-like opinion formation, while revealing geographic biases tied to AI models' origins.
🧠 GPT-4
AI · Bearish · arXiv – CS AI · Mar 9 · 6/10
🧠 Researchers tested the stability of moral judgments in large language models using nearly 3,000 ethical dilemmas, finding that narrative framing and evaluation methods significantly influence AI decisions. The study reveals that LLM moral reasoning depends heavily on how questions are presented rather than on their underlying moral substance, with only 35.7% consistency across different evaluation protocols (see the sketch after this entry).
🧠 GPT-4 · 🧠 Claude
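A toy sketch of the consistency measure the 35.7% figure implies: the share of dilemmas on which a model returns the same verdict under every evaluation protocol. Dilemmas, protocol names, and verdicts here are invented.

```python
# dilemma -> {protocol: verdict}; all values are illustrative.
verdicts = {
    "d1": {"first_person": "permissible", "third_person": "permissible", "multiple_choice": "permissible"},
    "d2": {"first_person": "permissible", "third_person": "impermissible", "multiple_choice": "permissible"},
    "d3": {"first_person": "impermissible", "third_person": "impermissible", "multiple_choice": "permissible"},
}
# A dilemma counts as consistent only if every protocol yields one verdict.
consistent = sum(1 for v in verdicts.values() if len(set(v.values())) == 1)
print(f"consistency: {consistent / len(verdicts):.1%}")  # 33.3% on this toy set
```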
AI · Neutral · arXiv – CS AI · Mar 3 · 6/10
🧠 Researchers introduce IRIS Benchmark, the first comprehensive evaluation framework for measuring fairness in Unified Multimodal Large Language Models (UMLLMs) across both understanding and generation tasks. The benchmark integrates 60 granular metrics across three dimensions and reveals systemic bias issues in leading AI models, including 'generation gaps' and 'personality splits'.
AI · Neutral · arXiv – CS AI · Mar 3 · 6/10
🧠 Researchers introduced OVERTONBENCH, a framework for measuring viewpoint diversity in large language models through the OVERTONSCORE metric. In a study of 8 LLMs with 1,208 participants, models scored 0.35-0.41 out of 1.0, with DeepSeek V3 performing best, showing significant room for improvement in pluralistic representation.
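OVERTONSCORE's exact formula isn't given in the summary; a plausible reading, sketched below under that assumption, scores an answer by the fraction of annotated reference viewpoints it represents, averaged over questions. The questions and viewpoint labels are invented.

```python
def overton_score(answers: dict[str, set[str]], reference: dict[str, set[str]]) -> float:
    # Per question: share of reference viewpoints the answer covers.
    per_question = [
        len(answers[q] & reference[q]) / len(reference[q])
        for q in reference
    ]
    return sum(per_question) / len(per_question)

# Toy annotated viewpoints per question vs. viewpoints a model expressed.
reference = {"q1": {"pro", "con", "conditional"}, "q2": {"pro", "con"}}
answers = {"q1": {"pro", "conditional"}, "q2": {"pro"}}
print(f"score: {overton_score(answers, reference):.2f}")  # 0.58 on this toy data
```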
AI · Neutral · arXiv – CS AI · Mar 2 · 6/10
🧠 Researchers developed BRIDGE, a framework to reduce bias in AI-powered automated scoring systems that unfairly penalize English Language Learners (ELLs). The system addresses representation bias by generating synthetic high-scoring ELL samples, achieving fairness improvements comparable to using additional human data while maintaining overall performance.
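A minimal sketch of the representation-bias fix as summarized: find under-represented (group, score) cells and fill them with synthetic essays before training the scorer. `generate_synthetic`, the score threshold, and the target cell size are placeholders, not BRIDGE's actual components.

```python
def augment_training_set(essays, generate_synthetic, target_per_cell=200):
    """essays: list of dicts with 'group', 'score', and 'text' keys."""
    # Count how many essays fall in each (group, score) cell.
    counts = {}
    for e in essays:
        cell = (e["group"], e["score"])
        counts[cell] = counts.get(cell, 0) + 1

    synthetic = []
    for (group, score), n in counts.items():
        # Fill under-represented high-scoring ELL cells with generated essays.
        if group == "ELL" and score >= 4 and n < target_per_cell:
            synthetic += [
                {"group": group, "score": score, "text": generate_synthetic(group, score)}
                for _ in range(target_per_cell - n)
            ]
    return essays + synthetic
```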