34 articles tagged with #misinformation. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.
AI · Bearish · Wired – AI · Mar 10 · 🔥 8/10
🧠X's AI chatbot Grok is failing to properly verify video content from the Iran conflict and is even producing fabricated images of the war itself. This highlights significant weaknesses in AI content verification systems during major geopolitical events.
🧠 Grok
AI · Neutral · arXiv – CS AI · Apr 7 · 7/10
🧠Researchers developed a new AI-generated video detection framework using a large-scale dataset of 140K videos from 15 generators and the Qwen2.5-VL Vision Transformer. The method operates at native resolution to preserve high-frequency forgery artifacts typically lost in preprocessing, achieving superior performance in detecting synthetic media.
Crypto · Bearish · Daily Hodl · Mar 26 · 7/10
⛓️Blockchain investigator ZachXBT has exposed a coordinated network of X (Twitter) accounts that create fake or exaggerated geopolitical news about Middle East conflicts to manufacture panic and promote cryptocurrency pump-and-dump schemes. The network exploits crisis situations to manipulate investors into fraudulent crypto investments.
$BTC
AI · Bearish · arXiv – CS AI · Mar 17 · 7/10
🧠Researchers developed DECEIVE-AFC, an adversarial attack framework that can significantly compromise AI-based fact-checking systems by manipulating claims to disrupt evidence retrieval and reasoning. The attacks reduced fact-checking accuracy from 78.7% to 53.7% in testing, highlighting major vulnerabilities in LLM-based verification systems.
AI · Neutral · arXiv – CS AI · Mar 5 · 6/10
🧠Researchers introduce BeliefSim, a framework that uses Large Language Models to simulate how different demographic groups are susceptible to misinformation based on their underlying beliefs. The system achieves up to 92% accuracy in predicting misinformation susceptibility by incorporating psychology-informed belief profiles.
AI · Neutral · arXiv – CS AI · Mar 4 · 7/10
🧠Researchers propose Credibility Governance (CG), a new mechanism that improves collective decision-making on online platforms by dynamically scoring agent and opinion credibility based on alignment with emerging evidence. Testing in simulated environments shows CG outperforms traditional voting and stake-weighted systems, offering better resistance to misinformation and manipulation.
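The dynamic-scoring idea behind Credibility Governance can be illustrated with a minimal sketch. This is an invented simplification, not the paper's mechanism: agent names, the additive update rule, and the learning rate are all assumptions. The core loop is that agents whose past votes aligned with later evidence gain weight, and future decisions are aggregated by those weights rather than by one-agent-one-vote.

```python
# Minimal sketch of evidence-aligned credibility weighting, loosely inspired by
# the Credibility Governance summary above. The update rule and parameters are
# illustrative assumptions, not the paper's method.

def update_credibility(cred: dict, votes: dict, outcome: bool, lr: float = 0.25) -> dict:
    """Raise credibility of agents whose vote matched later evidence; lower the rest.

    cred:  agent -> credibility in [0, 1]
    votes: agent -> True/False vote on the claim
    outcome: what the evidence eventually showed
    """
    return {
        agent: max(0.0, min(1.0, c + (lr if votes[agent] == outcome else -lr)))
        for agent, c in cred.items()
    }

def weighted_decision(cred: dict, votes: dict) -> bool:
    """Aggregate votes weighted by current credibility instead of a plain majority."""
    yes = sum(c for agent, c in cred.items() if votes[agent])
    no = sum(c for agent, c in cred.items() if not votes[agent])
    return yes > no
```

Compared with stake-weighted voting, the weight here is earned only by past alignment with evidence, which is what gives the scheme its claimed resistance to manipulation.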
AI · Neutral · arXiv – CS AI · Mar 3 · 7/10
🧠Researchers developed a new algorithm called Learn-to-Distance (L2D) that can detect AI-generated text from models like GPT, Claude, and Gemini with significantly improved accuracy. The method uses adaptive distance learning between original and rewritten text, achieving 54.3% to 75.4% relative improvements over existing detection methods across extensive testing.
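The intuition behind rewrite-distance detection can be sketched in a few lines. This is a hedged simplification: L2D's adaptive distance learning is not reproduced, and the similarity metric, threshold, and function names below are assumptions. The premise is that when an LLM rewrites AI-generated text it changes little, while rewriting human text changes more, so similarity between a text and its rewrite can serve as a detection score.

```python
# Toy version of the rewrite-distance idea behind detectors like L2D.
# The actual paper learns an adaptive distance; here we use a plain
# character-level similarity as a stand-in (an illustrative assumption).
import difflib

def rewrite_similarity(original: str, rewritten: str) -> float:
    """Character-level similarity in [0, 1] between a text and its LLM rewrite."""
    return difflib.SequenceMatcher(None, original, rewritten).ratio()

def looks_ai_generated(original: str, rewritten: str, threshold: float = 0.8) -> bool:
    # High similarity after rewriting => likely AI-generated.
    # The 0.8 threshold is illustrative, not from the paper.
    return rewrite_similarity(original, rewritten) >= threshold
```

In practice the rewrite would come from an LLM call; the detector only needs the pair of strings, which is what makes the approach model-agnostic across GPT, Claude, and Gemini outputs.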
AI · Neutral · arXiv – CS AI · Mar 3 · 7/10
🧠Researchers have identified and studied the 'Mandela effect' in AI multi-agent systems, where groups of AI agents collectively develop false memories or misremember information. The study introduces MANBENCH, a benchmark to evaluate this phenomenon, and proposes mitigation strategies that achieved a 74.40% reduction in false collective memories.
AI · Bearish · OpenAI News · Aug 16 · 7/10
🧠Social media platforms banned accounts linked to an Iranian influence operation that used ChatGPT to generate content targeting the U.S. presidential campaign and other topics. The operation reportedly did not reach a significant audience.
AI · Neutral · arXiv – CS AI · 2d ago · 6/10
🧠TRUST Agents is a multi-agent AI framework designed to improve fake news detection and fact verification by combining claim extraction, evidence retrieval, verification, and explainable reasoning. Unlike binary classification approaches, the system generates transparent, human-inspectable reports with logic-aware reasoning for complex claims, though it shows that retrieval quality and uncertainty calibration remain significant challenges in automated fact verification.
AI · Neutral · arXiv – CS AI · 3d ago · 6/10
🧠Researchers propose a steganography-based attribution framework that embeds cryptographic identifiers into AI-generated images to combat harmful misuse on social platforms. The system combines watermarking techniques with CLIP-based multimodal detection to achieve 0.99 AUC-ROC performance, enabling reliable forensic tracing of synthetic media used in misinformation campaigns.
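A classroom-style sketch shows what embedding an identifier into pixel data means mechanically. This is not the paper's scheme: the cryptographic identifier construction and the CLIP-based detector are not reproduced, and plain least-significant-bit embedding like this is trivially removable in practice. It only illustrates the round trip of hiding and recovering bits in image bytes.

```python
# Toy least-significant-bit (LSB) watermark: hide an identifier in the lowest
# bit of each pixel byte, then read it back. Purely illustrative; a real
# attribution system would use a robust, keyed scheme.

def embed_id(pixels: bytes, ident: bytes) -> bytes:
    """Write ident's bits (LSB-first per byte) into the pixels' lowest bits."""
    bits = [(byte >> i) & 1 for byte in ident for i in range(8)]
    assert len(pixels) >= len(bits), "image too small for identifier"
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # clear then set the lowest bit
    return bytes(out)

def extract_id(pixels: bytes, n_bytes: int) -> bytes:
    """Recover n_bytes of identifier from the pixels' lowest bits."""
    bits = [p & 1 for p in pixels[: n_bytes * 8]]
    return bytes(
        sum(bits[b * 8 + i] << i for i in range(8)) for b in range(n_bytes)
    )
```

The forensic value comes from the identifier surviving redistribution, which is why the paper pairs watermarking with a learned detector rather than relying on fragile LSB bits alone.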
AI · Bearish · The Register – AI · 3d ago · 6/10
🧠A recent survey reveals public concern that AI technologies will negatively impact elections through misinformation and deepfakes, while also damaging personal relationships. The findings highlight growing societal anxiety about AI's role in information integrity and social cohesion.
AI · Bearish · arXiv – CS AI · 4d ago · 6/10
🧠Researchers found that large language models fail to accurately simulate human susceptibility to misinformation, consistently overstating how attitudes drive belief and sharing while ignoring social network effects. The study reveals systematic biases in how LLMs represent misinformation concepts, suggesting they are better tools for identifying where AI diverges from human judgment rather than replacing human survey responses.
AI · Neutral · arXiv – CS AI · Apr 10 · 6/10
🧠Researchers propose G-Defense, a graph-enhanced framework that uses large language models and retrieval-augmented generation to detect fake news while providing explainable, fine-grained reasoning. The system decomposes news claims into sub-claims, retrieves competing evidence, and generates transparent explanations without requiring verified fact-checking databases.
AI · Neutral · arXiv – CS AI · Apr 7 · 6/10
🧠A research study using the JudgeGPT platform found that humans cannot reliably distinguish between AI-generated and human-written news articles, across 2,318 judgments from 1,054 participants. The study tested six different LLMs and concluded that user-side detection is not viable, suggesting the need for cryptographic content provenance systems.
AI · Neutral · arXiv – CS AI · Apr 7 · 6/10
🧠Researchers have developed LiveFact, a new dynamic benchmark for evaluating Large Language Models' ability to detect fake news and misinformation in real-time conditions. The benchmark addresses limitations of static testing by using temporal evidence sets and finds that open-source models like Qwen3-235B-A22B now match proprietary systems in performance.
AI · Bearish · The Register – AI · Mar 26 · 6/10
🧠A British lawmaker who was targeted by AI deepfake technology has been unable to obtain satisfactory responses from major US technology companies regarding the incident. The case highlights growing concerns about accountability and transparency from Big Tech firms when dealing with AI-generated misinformation and impersonation.
Crypto · Bearish · CoinDesk · Mar 25 · 6/10
⛓️Ryan Kirkley analyzes how crypto prediction markets, while designed to forecast outcomes, can actually influence and reshape power structures. The article highlights risks of market manipulation and the potential for these platforms to amplify misinformation at scale.
AI · Bearish · Decrypt – AI · Mar 16 · 6/10
🧠A viral story claiming ChatGPT helped cure a dog's cancer by designing a custom vaccine has been disputed by the actual scientists involved. The researchers say the AI's role was minimal and the credit for the breakthrough belongs to traditional scientific methods and expertise.
🧠 ChatGPT
AI · Neutral · arXiv – CS AI · Mar 11 · 6/10
🧠Researchers developed a method using Large Language Models to create personalized fake news debunking messages tailored to individuals' Big Five personality traits. The study found that personalized debunking messages are more persuasive than generic ones, with traits like Openness increasing persuadability while Neuroticism decreases it.
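The routing step of trait-personalized debunking can be shown with a toy dispatcher. Everything below is invented for illustration: the templates, the dominant-trait rule, and the function names are not from the study, which generates messages with an LLM rather than from fixed strings.

```python
# Illustrative only: pick a debunking message style by the reader's
# highest-scoring Big Five trait. Templates and routing rule are invented
# stand-ins for the study's LLM-generated, trait-conditioned messages.

TEMPLATES = {
    "openness": "Here is the evidence trail behind this claim; explore the sources yourself.",
    "neuroticism": "This claim is false; reputable fact-checkers confirm there is no cause for alarm.",
    "conscientiousness": "Point by point, here is where this claim diverges from the verified record.",
}

def pick_debunk(traits: dict) -> str:
    """Return the template for the reader's strongest trait that has a template.

    traits: Big Five trait -> score in [0, 1]
    """
    dominant = max((t for t in traits if t in TEMPLATES), key=traits.get)
    return TEMPLATES[dominant]
```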
AI · Neutral · TechCrunch – AI · Mar 10 · 6/10
🧠YouTube is expanding its AI deepfake detection tool to politicians, journalists, and government officials, allowing them to flag and request removal of unauthorized AI-generated content featuring their likeness. This represents a significant step in content moderation as AI-generated media becomes more sophisticated and widespread.
AI · Bearish · The Verge – AI · Mar 10 · 6/10
🧠Meta's Oversight Board criticized the company's deepfake detection methods as inadequate for combating AI-generated misinformation during conflicts. The board is calling for Meta to overhaul how it identifies and labels AI-generated content across Facebook, Instagram, and Threads following an investigation into a fake AI video about alleged damage in Israel.
AI · Neutral · CoinTelegraph · Mar 4 · 5/10
🧠X (formerly Twitter) has implemented a 90-day revenue-sharing ban for creators who post AI-generated war footage without proper disclosure. This policy aims to address the spread of undisclosed synthetic content depicting warfare on the platform.
AI · Neutral · The Verge – AI · Mar 3 · 6/10
🧠Following recent military strikes on Iran, floods of fake images and videos have appeared online, including AI-generated content and footage lifted from video games like War Thunder. Newsrooms and verification outlets such as The New York Times, Indicator, and Bellingcat use extensive verification procedures to combat the spread of synthetic and misleading content during major news events.
AI · Bearish · arXiv – CS AI · Mar 3 · 7/10
🧠Researchers introduced the Synthetic Web Benchmark, revealing that frontier AI language models fail catastrophically when exposed to high-plausibility misinformation in search results. The study shows current AI agents struggle to handle conflicting information sources, with accuracy collapsing despite access to truthful content.