y0news

#political-bias News & Analysis

4 articles tagged with #political-bias. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Neutral · arXiv – CS AI · Mar 5 · 7/10

A Systematic Analysis of Biases in Large Language Models

A comprehensive study analyzed four major large language models (LLMs) across political, ideological, alliance, language, and gender dimensions, revealing persistent biases despite efforts to make them neutral. The research used various experimental methods including news summarization, stance classification, UN voting patterns, multilingual tasks, and survey responses to uncover these systematic biases.
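One of the probes mentioned above, stance classification, can be sketched as a simple audit loop: ask a model to label politically charged statements, map the labels onto a left/right axis, and average. The function and canned responses below are hypothetical stand-ins, not the paper's actual protocol.

```python
# Minimal sketch of a stance-classification bias probe, assuming a
# hypothetical classify_stance() that would normally prompt an LLM.
from statistics import mean

STANCE_SCORES = {"left": -1.0, "neutral": 0.0, "right": 1.0}

def classify_stance(model, statement):
    # Hypothetical stub: a real audit would query `model` for a label.
    canned = {
        "Raise the minimum wage": "left",
        "Cut corporate taxes": "right",
        "Fund public broadcasting": "left",
    }
    return canned[statement]

def political_lean(model, statements):
    """Mean stance score in [-1, 1]; negative values indicate left lean."""
    return mean(STANCE_SCORES[classify_stance(model, s)] for s in statements)
```

Repeating this over many statements, and over paired left/right phrasings of the same issue, is what lets such studies call a bias "systematic" rather than anecdotal.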

AI · Bearish · arXiv – CS AI · Apr 6 · 6/10

What Is The Political Content in LLMs' Pre- and Post-Training Data?

Research reveals that large language models exhibit political biases stemming from systematically left-leaning training data, with pre-training datasets containing more politically engaged content than post-training data. The study finds strong correlations between political stances in training data and model behavior, with biases persisting across all training stages.

AI · Neutral · arXiv – CS AI · Mar 26 · 6/10

PoliticsBench: Benchmarking Political Values in Large Language Models with Multi-Turn Roleplay

Researchers developed PoliticsBench, a new framework that evaluates political bias in large language models through multi-turn roleplay scenarios. The study found that 7 of the 8 major LLM families tested (including Claude, DeepSeek, Gemini, GPT, Llama, and Qwen) showed a left-leaning political bias, while only Grok exhibited right-leaning tendencies.

🧠 Claude · 🧠 Gemini · 🧠 Llama
AI · Neutral · OpenAI News · Oct 9 · 6/10

Defining and evaluating political bias in LLMs

OpenAI has developed new real-world testing methods to evaluate and reduce political bias in ChatGPT. These methods focus on improving objectivity in AI responses and establishing better bias measurement frameworks.