y0news

#misinformation News & Analysis

35 articles tagged with #misinformation. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Bearish · Wired – AI · Mar 10 · 🔥 8/10

Fake AI Content About the Iran War Is All Over X

X's AI chatbot Grok is failing to properly verify video content from the Iran conflict and is generating its own AI-created images about the war. This highlights significant issues with AI content verification systems during major geopolitical events.

Crypto · Neutral · Blockonomi · 1h ago · 🔥 8/10

Chinese Commentator Calls Bitcoin a CIA Trap as Iran’s Military Uses It to Bypass Sanctions

A Yale-educated Chinese commentator claims Bitcoin is CIA-controlled infrastructure, even as reporting reveals that Iran's Islamic Revolutionary Guard Corps uses Bitcoin to collect millions in sanctions-evasion fees. The assertion contradicts Bitcoin's technical reality: a decentralized network operating across 22,174 nodes in 164 countries with no central control point.

$BTC
AI · Neutral · arXiv – CS AI · Apr 7 · 7/10

Preserving Forgery Artifacts: AI-Generated Video Detection at Native Scale

Researchers developed a new AI-generated video detection framework using a large-scale dataset of 140K videos from 15 generators and the Qwen2.5-VL vision-language model. The method operates at native resolution to preserve high-frequency forgery artifacts typically lost in preprocessing, achieving superior performance in detecting synthetic media.

AI · Bearish · arXiv – CS AI · Mar 17 · 7/10

DECEIVE-AFC: Adversarial Claim Attacks against Search-Enabled LLM-based Fact-Checking Systems

Researchers developed DECEIVE-AFC, an adversarial attack framework that can significantly compromise AI-based fact-checking systems by manipulating claims to disrupt evidence retrieval and reasoning. The attacks reduced fact-checking accuracy from 78.7% to 53.7% in testing, highlighting major vulnerabilities in LLM-based verification systems.

AI · Neutral · arXiv – CS AI · Mar 5 · 6/10

Belief-Sim: Towards Belief-Driven Simulation of Demographic Misinformation Susceptibility

Researchers introduce Belief-Sim, a framework that uses Large Language Models to simulate how different demographic groups are susceptible to misinformation based on their underlying beliefs. The system achieves up to 92% accuracy in predicting misinformation susceptibility by incorporating psychology-informed belief profiles.

AI · Neutral · arXiv – CS AI · Mar 4 · 7/10

Credibility Governance: A Social Mechanism for Collective Self-Correction under Weak Truth Signals

Researchers propose Credibility Governance (CG), a new mechanism that improves collective decision-making on online platforms by dynamically scoring agent and opinion credibility based on alignment with emerging evidence. Testing in simulated environments shows CG outperforms traditional voting and stake-weighted systems, offering better resistance to misinformation and manipulation.
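The paper's exact formulation is not given in the summary; as a toy sketch of the general idea (credibility-weighted voting plus a dynamic score update toward agents whose opinions align with emerging evidence), with all names and the update rule being assumptions, not the authors' mechanism:

```python
def credibility_vote(opinions, credibility):
    """Aggregate binary opinions, weighting each agent by its credibility score.

    A high-credibility minority can outweigh a low-credibility majority,
    which is what gives the mechanism resistance to coordinated manipulation.
    """
    score = sum(credibility[a] * (1 if vote else -1) for a, vote in opinions.items())
    return score > 0

def update_credibility(credibility, opinions, evidence, lr=0.2):
    """Nudge each agent's score toward 1 if its vote matched the evidence, toward 0 otherwise."""
    updated = {}
    for agent, vote in opinions.items():
        target = 1.0 if vote == evidence else 0.0
        updated[agent] = credibility[agent] + lr * (target - credibility[agent])
    return updated

# One trusted agent outvotes two untrusted ones; scores then drift with the evidence.
opinions = {"a": True, "b": False, "c": False}
cred = {"a": 0.9, "b": 0.3, "c": 0.3}
decision = credibility_vote(opinions, cred)
cred = update_credibility(cred, opinions, evidence=True)
```

Traditional one-agent-one-vote would flip the decision here; the credibility weighting is what the paper's simulations compare against voting and stake-weighted baselines.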

AI · Neutral · arXiv – CS AI · Mar 3 · 7/10

Learn-to-Distance: Distance Learning for Detecting LLM-Generated Text

Researchers developed a new algorithm called Learn-to-Distance (L2D) that can detect AI-generated text from models like GPT, Claude, and Gemini with significantly improved accuracy. The method uses adaptive distance learning between original and rewritten text, achieving 54.3% to 75.4% relative improvements over existing detection methods across extensive testing.

AI · Bearish · OpenAI News · Aug 16 · 7/10

Disrupting a covert Iranian influence operation

OpenAI banned ChatGPT accounts linked to a covert Iranian influence operation that used the chatbot to generate content targeting the U.S. presidential campaign and other topics; associated social media accounts were also removed. The operation reportedly did not reach a significant audience.

AI · Neutral · arXiv – CS AI · 2d ago · 6/10

TRUST Agents: A Collaborative Multi-Agent Framework for Fake News Detection, Explainable Verification, and Logic-Aware Claim Reasoning

TRUST Agents is a multi-agent AI framework designed to improve fake news detection and fact verification by combining claim extraction, evidence retrieval, verification, and explainable reasoning. Unlike binary classification approaches, the system generates transparent, human-inspectable reports with logic-aware reasoning for complex claims, though the authors note that retrieval quality and uncertainty calibration remain significant challenges in automated fact verification.

AI · Neutral · arXiv – CS AI · 3d ago · 6/10

Toward Accountable AI-Generated Content on Social Platforms: Steganographic Attribution and Multimodal Harm Detection

Researchers propose a steganography-based attribution framework that embeds cryptographic identifiers into AI-generated images to combat harmful misuse on social platforms. The system combines watermarking techniques with CLIP-based multimodal detection to achieve 0.99 AUC-ROC performance, enabling reliable forensic tracing of synthetic media used in misinformation campaigns.
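The paper's actual watermarking scheme is not detailed in the summary; as a minimal illustration of the underlying idea (hiding an identifier in imperceptible pixel changes so it can later be recovered for attribution), here is a toy least-significant-bit embed over raw pixel values. This is an assumed stand-in, not the authors' cryptographic method:

```python
def embed_id(pixels, identifier_bits):
    """Write identifier bits into the least significant bit of the first pixels.

    Each pixel value changes by at most 1, so the stamped image is
    visually indistinguishable from the original.
    """
    stamped = list(pixels)
    for i, bit in enumerate(identifier_bits):
        stamped[i] = (stamped[i] & ~1) | bit
    return stamped

def extract_id(pixels, n_bits):
    """Recover the embedded identifier by reading back the low bits."""
    return [p & 1 for p in pixels[:n_bits]]

# Stamp an 8-bit generator ID into a strip of grayscale pixel values.
pixels = [120, 121, 130, 131, 140, 141, 150, 151]
generator_id = [1, 0, 1, 1, 0, 0, 1, 0]
stamped = embed_id(pixels, generator_id)
recovered = extract_id(stamped, len(generator_id))
```

A production scheme would instead embed a signed cryptographic token robustly across the whole image (surviving compression and cropping), which is where the framework's reported 0.99 AUC-ROC detection component comes in.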

AI · Bearish · The Register – AI · 3d ago · 6/10

The votes are in: AI will hurt elections and relationships

A recent survey reveals public concern that AI technologies will negatively impact elections through misinformation and deepfakes, while also damaging personal relationships. The findings highlight growing societal anxiety about AI's role in information integrity and social cohesion.

AI · Bearish · arXiv – CS AI · 4d ago · 6/10

Overstating Attitudes, Ignoring Networks: LLM Biases in Simulating Misinformation Susceptibility

Researchers found that large language models fail to accurately simulate human susceptibility to misinformation, consistently overstating how attitudes drive belief and sharing while ignoring social network effects. The study reveals systematic biases in how LLMs represent misinformation concepts, suggesting they are better tools for identifying where AI diverges from human judgment rather than replacing human survey responses.

AI · Neutral · arXiv – CS AI · Apr 10 · 6/10

A Graph-Enhanced Defense Framework for Explainable Fake News Detection with LLM

Researchers propose G-Defense, a graph-enhanced framework that uses large language models and retrieval-augmented generation to detect fake news while providing explainable, fine-grained reasoning. The system decomposes news claims into sub-claims, retrieves competing evidence, and generates transparent explanations without requiring verified fact-checking databases.

AI · Neutral · arXiv – CS AI · Apr 7 · 6/10

LiveFact: A Dynamic, Time-Aware Benchmark for LLM-Driven Fake News Detection

Researchers have developed LiveFact, a new dynamic benchmark for evaluating Large Language Models' ability to detect fake news and misinformation in real-time conditions. The benchmark addresses limitations of static testing by using temporal evidence sets and finds that open-source models like Qwen3-235B-A22B now match proprietary systems in performance.

AI · Neutral · arXiv – CS AI · Apr 7 · 6/10

Can Humans Tell? A Dual-Axis Study of Human Perception of LLM-Generated News

A study using the JudgeGPT platform found that humans cannot reliably distinguish between AI-generated and human-written news articles, based on 2,318 judgments from 1,054 participants. The study tested six different LLMs and concluded that user-side detection is not viable, suggesting the need for cryptographic content provenance systems.

AI · Bearish · The Register – AI · Mar 26 · 6/10

Brit lawmaker targeted by AI deepfake fails to get answers from US Big Tech

A British lawmaker who was targeted by AI deepfake technology has been unable to obtain satisfactory responses from major US technology companies regarding the incident. The case highlights growing concerns about accountability and transparency from Big Tech firms when dealing with AI-generated misinformation and impersonation.

AI · Bearish · Decrypt – AI · Mar 16 · 6/10

Did ChatGPT Really Cure a Dog's Cancer? It's Complicated

A viral story claiming ChatGPT helped cure a dog's cancer by designing a custom vaccine has been disputed by the actual scientists involved. The researchers say the AI's role was minimal and the credit for the breakthrough belongs to traditional scientific methods and expertise.

AI · Neutral · arXiv – CS AI · Mar 11 · 6/10

Enhancing Debunking Effectiveness through LLM-based Personality Adaptation

Researchers developed a method using Large Language Models to create personalized fake news debunking messages tailored to individuals' Big Five personality traits. The study found that personalized debunking messages are more persuasive than generic ones, with traits like Openness increasing persuadability while Neuroticism decreases it.

AI · Neutral · TechCrunch – AI · Mar 10 · 6/10

YouTube expands AI deepfake detection for politicians, government officials, and journalists

YouTube is expanding its AI deepfake detection tool to politicians, journalists, and government officials, allowing them to flag and request removal of unauthorized AI-generated content featuring their likeness. This represents a significant step in content moderation as AI-generated media becomes more sophisticated and widespread.

AI · Bearish · The Verge – AI · Mar 10 · 6/10

Meta’s deepfake moderation isn’t good enough, says Oversight Board

Meta's Oversight Board criticized the company's deepfake detection methods as inadequate for combating AI-generated misinformation during conflicts. The board is calling for Meta to overhaul how it identifies and labels AI-generated content across Facebook, Instagram, and Threads following an investigation into a fake AI video about alleged damage in Israel.

AI · Neutral · CoinTelegraph · Mar 4 · 5/10

X introduces 90-day revenue-sharing ban for undisclosed AI war videos

X (formerly Twitter) has implemented a 90-day revenue-sharing ban for creators who post AI-generated war footage without proper disclosure. This policy aims to address the spread of undisclosed synthetic content depicting warfare on the platform.

AI · Neutral · The Verge – AI · Mar 3 · 6/10

Here’s how journalists spot deepfakes

Following recent military strikes on Iran, floods of fake images and videos have appeared online, including AI-generated content and footage from video games like War Thunder. Reputable news organizations like The New York Times, Indicator, and Bellingcat use extensive verification procedures to combat the spread of synthetic and misleading content during major news events.
