10 articles tagged with #fact-checking. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.
AI · Bearish · arXiv – CS AI · Mar 17 · 7/10
🧠 Researchers developed DECEIVE-AFC, an adversarial attack framework that can significantly compromise AI-based fact-checking systems by manipulating claims to disrupt evidence retrieval and reasoning. The attacks reduced fact-checking accuracy from 78.7% to 53.7% in testing, highlighting major vulnerabilities in LLM-based verification systems.
AI · Neutral · arXiv – CS AI · 3d ago · 6/10
🧠 Researchers introduce MERMAID, a memory-enhanced multi-agent framework for automated fact-checking that couples evidence retrieval with reasoning. The system achieves state-of-the-art performance on multiple benchmarks by reusing retrieved evidence across claims, reducing redundant searches and improving verification efficiency.
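The summary doesn't say how MERMAID's evidence memory works internally; a minimal sketch of the evidence-reuse idea, where all names (`EvidenceMemory`, `verify`, the `retrieve` and `judge` callables, the 0.5 overlap threshold) are hypothetical illustrations, not the paper's API:

```python
from dataclasses import dataclass, field


@dataclass
class EvidenceMemory:
    """Hypothetical cache of retrieved evidence, keyed by claim terms."""
    store: dict = field(default_factory=dict)

    def lookup(self, query_terms: frozenset):
        # Reuse evidence whose query terms sufficiently overlap the new claim.
        for terms, evidence in self.store.items():
            if len(terms & query_terms) / max(len(query_terms), 1) >= 0.5:
                return evidence
        return None

    def add(self, query_terms: frozenset, evidence: list) -> None:
        self.store[query_terms] = evidence


def verify(claim: str, memory: EvidenceMemory, retrieve, judge):
    """Check a claim, hitting the retriever only on a memory miss."""
    terms = frozenset(claim.lower().split())
    evidence = memory.lookup(terms)
    if evidence is None:               # cache miss: fresh retrieval
        evidence = retrieve(claim)
        memory.add(terms, evidence)
    return judge(claim, evidence)
```

Under this sketch, a second near-duplicate claim skips the search step entirely, which is the efficiency gain the summary describes.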
AI · Neutral · arXiv – CS AI · Apr 10 · 6/10
🧠 Researchers propose G-Defense, a graph-enhanced framework that uses large language models and retrieval-augmented generation to detect fake news while providing explainable, fine-grained reasoning. The system decomposes news claims into sub-claims, retrieves competing evidence, and generates transparent explanations without requiring verified fact-checking databases.
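The decompose-then-verify pipeline the summary describes can be sketched roughly as follows; the prompt, the `decompose`/`check_claim` names, and the any-refuted-means-fake aggregation rule are all assumptions for illustration, not details from the paper:

```python
def decompose(claim: str, llm) -> list:
    """Ask an LLM to split a news claim into atomic sub-claims
    (hypothetical prompt format)."""
    prompt = f"Split into independent factual sub-claims, one per line:\n{claim}"
    return [line.strip() for line in llm(prompt).splitlines() if line.strip()]


def check_claim(claim: str, llm, retrieve, judge) -> dict:
    """Verify each sub-claim against retrieved evidence; here a claim
    counts as fake if any sub-claim is refuted."""
    verdicts = {}
    for sub in decompose(claim, llm):
        evidence = retrieve(sub)              # supporting and competing passages
        verdicts[sub] = judge(sub, evidence)  # True = supported
    label = "real" if all(verdicts.values()) else "fake"
    return {"label": label, "verdicts": verdicts}
```

Returning the per-sub-claim verdicts alongside the label is what makes the output explainable: the reader sees which part of the claim failed.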
AI · Neutral · arXiv – CS AI · Apr 7 · 6/10
🧠 Researchers introduce FactReview, an AI system that improves academic peer review by combining claim extraction, literature positioning, and code execution to verify research claims. The system addresses weaknesses in current LLM-based reviewing by grounding assessments in external evidence rather than relying solely on manuscript narratives.
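The code-execution leg of such a system amounts to re-running a reproduction snippet and comparing its output to a claimed number. A toy sketch (function name, the `result` variable convention, and the tolerance are assumptions; a real system would sandbox the execution):

```python
def verify_numeric_claim(claimed_value: float, repro_code: str,
                         tol: float = 1e-6) -> bool:
    """Execute a reproduction snippet and compare its `result` variable
    against the claimed number. Sandboxing omitted for brevity."""
    scope = {}
    exec(repro_code, scope)   # never do this on untrusted code in practice
    return abs(scope["result"] - claimed_value) <= tol
```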
AI · Bullish · arXiv – CS AI · Apr 7 · 6/10
🧠 Researchers have developed SHARP, a new AI agent that significantly improves knowledge graph verification by combining internal structural data with external evidence. The system achieved 4.2% and 12.9% accuracy improvements over existing methods on major datasets, offering better interpretability for complex fact verification tasks.
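"Combining internal structural data with external evidence" can be pictured as pooling a candidate triple's graph neighborhood with retrieved text before judging it. A minimal sketch, with all names hypothetical and the knowledge graph reduced to a list of triples:

```python
def verify_triple(head, relation, tail, kg, retrieve, judge):
    """Judge a candidate fact using both internal KG structure
    (neighboring triples of the head entity) and external text evidence."""
    internal = [t for t in kg if t[0] == head]        # structural context
    external = retrieve(f"{head} {relation} {tail}")  # retrieved passages
    return judge((head, relation, tail), internal + external)
```

Passing both evidence types to one judge is also what enables interpretability: the verdict can cite which neighbor triple or passage drove it.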
AI · Bullish · arXiv – CS AI · Mar 17 · 6/10
🧠 Researchers propose a new framework for large language models that separates planning from factual retrieval to improve reliability in fact-seeking question answering. The modular approach uses a lightweight student planner trained via teacher-student learning to generate structured reasoning steps, showing improved accuracy and speed on challenging benchmarks.
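The planner/retriever separation can be sketched as a two-stage pipeline: the planner only emits structured steps, and a separate retriever fills in the facts. The step format (`LOOKUP:` prefixes) and all function names are hypothetical, not the paper's interface:

```python
def answer(question: str, planner, retriever, reader) -> str:
    """Two-stage QA: a lightweight planner emits retrieval steps,
    then a retriever/reader grounds each step in facts."""
    steps = planner(question)   # e.g. ["LOOKUP: X", "LOOKUP: Y", "COMBINE"]
    facts = []
    for step in steps:
        if step.startswith("LOOKUP:"):
            facts.append(retriever(step.removeprefix("LOOKUP:").strip()))
    return reader(question, facts)
```

Because the planner never has to store facts itself, it can be a small distilled model, which is where the claimed speed gain would come from.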
AI · Neutral · arXiv – CS AI · Mar 17 · 6/10
🧠 Researchers released MALINT, the first human-annotated English dataset for detecting disinformation and its malicious intent, developed with expert fact-checkers. The study benchmarked 12 language models and introduced intent-based inoculation techniques that improved zero-shot disinformation detection across six datasets, five LLMs, and seven languages.
AI · Neutral · The Verge – AI · Mar 3 · 6/10
🧠 Following recent military strikes on Iran, a flood of fake images and videos has appeared online, including AI-generated content and footage from video games like War Thunder. Reputable news organizations such as The New York Times, Indicator, and Bellingcat use extensive verification procedures to combat the spread of synthetic and misleading content during major news events.
AI · Bearish · arXiv – CS AI · Feb 27 · 6/10
🧠 Researchers analyzed the factual accuracy of Chinese web information systems, comparing traditional search engines, standalone LLMs, and AI overviews using 12,161 real-world queries. The study found substantial differences in factual accuracy across systems and estimated potential misinformation exposure for Chinese users.
AI · Neutral · arXiv – CS AI · Mar 3 · 5/10
🧠 Researchers propose WKGFC, a new AI system that uses knowledge graphs and multi-agent retrieval to improve fact-checking accuracy. The system addresses limitations of current methods that rely on textual similarity by implementing an automated Markov Decision Process with LLM agents to retrieve and verify evidence from multiple sources.
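An MDP-framed fact-checker boils down to a loop where the state is the claim plus the evidence gathered so far, and a policy chooses the next action. A minimal sketch, assuming a two-action space (`RETRIEVE <query>` / `STOP`) and hypothetical `policy`, `retrieve`, and `judge` callables:

```python
def fact_check(claim: str, policy, retrieve, judge, max_steps: int = 5):
    """MDP-style loop: the policy observes (claim, evidence-so-far)
    and either gathers more evidence or stops for a verdict."""
    evidence = []
    for _ in range(max_steps):
        action = policy(claim, evidence)   # "RETRIEVE <query>" or "STOP"
        if action == "STOP":
            break
        query = action.removeprefix("RETRIEVE").strip()
        evidence.extend(retrieve(query))
    return judge(claim, evidence)
```

In the paper's setting the policy would be an LLM agent; here any callable with the same interface works, which makes the loop easy to unit-test.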