arXiv — CS AI · 14h ago
Exploring Knowledge Conflicts for Faithful LLM Reasoning: Benchmark and Method
Researchers introduce ConflictQA, a benchmark showing that large language models struggle to reconcile conflicting information across knowledge sources (text vs. knowledge graphs) in retrieval-augmented generation systems. The study also proposes XoT, an explanation-based framework that improves faithful reasoning when LLMs encounter contradictory evidence.