
When Do Hallucinations Arise? A Graph Perspective on the Evolution of Path Reuse and Path Compression

arXiv – CS AI | Xinnan Dai, Kai Yang, Cheng Luo, Shenglai Zeng, Kai Guo, Jiliang Tang
🤖 AI Summary

A new arXiv preprint identifies two key mechanisms behind reasoning hallucinations in large language models: Path Reuse and Path Compression. The study models next-token prediction as a search over a graph of entities, showing how memorized knowledge can override contextual constraints and how frequently traversed reasoning paths collapse into shortcuts that lead to unsupported conclusions.
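
To make the framing concrete, here is a minimal sketch, assuming a toy knowledge graph; the entities, relations, and the `reason` helper are illustrative inventions, not the paper's code. Each edge the search follows stands in for one next-token step.

```python
from collections import deque

# Hypothetical toy knowledge graph: entities as nodes, directed edges
# as the transitions the model has learned between them.
GRAPH = {
    "aspirin": ["acetylsalicylic acid"],
    "acetylsalicylic acid": ["COX inhibition"],
    "COX inhibition": ["reduced inflammation"],
}

def reason(start, goal):
    """Breadth-first search standing in for step-by-step decoding:
    returns the entity path from the query to the answer, or None."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in GRAPH.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(reason("aspirin", "reduced inflammation"))
# -> ['aspirin', 'acetylsalicylic acid', 'COX inhibition', 'reduced inflammation']
```

In this framing, a hallucination is a path the search should not have taken: either the wrong outgoing edge was preferred (Path Reuse) or a shortcut edge replaced the full chain (Path Compression).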

Key Takeaways
  • LLM reasoning hallucinations stem from two mechanisms that emerge during training: Path Reuse and Path Compression.
  • Path Reuse occurs when memorized knowledge overrides contextual constraints in early training phases.
  • Path Compression happens when multi-step reasoning paths collapse into shortcut edges during later training (both mechanisms are sketched after this list).
  • The research models next-token prediction as a graph search process with entities as nodes and transitions as edges.
  • These mechanisms provide a unified explanation for why LLMs produce fluent but factually incorrect reasoning.
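
To illustrate the two failure modes, here is a minimal sketch, again assuming a toy graph; edge weights stand in for how often a transition was seen during training, and every class name, entity, and frequency is a hypothetical illustration rather than the paper's method.

```python
from collections import defaultdict

class ToyReasoner:
    """Toy graph reasoner: entities as nodes, edge weights standing in
    for how often each transition appeared in training data."""

    def __init__(self):
        self.edges = defaultdict(dict)  # src -> {dst: frequency}

    def add_edge(self, src, dst, freq=1):
        self.edges[src][dst] = self.edges[src].get(dst, 0) + freq

    def step(self, node, allowed=None):
        """One greedy decoding step: follow the most frequent edge.
        Deliberately ignores the contextual constraint `allowed`,
        mirroring Path Reuse, where a dominant memorized edge overrides
        what the context permits."""
        if not self.edges[node]:
            return None
        return max(self.edges[node], key=self.edges[node].get)

    def compress(self, path):
        """Path Compression: collapse a multi-step path into a single
        shortcut edge, discarding the intermediate nodes that actually
        justified the conclusion."""
        freq = min(self.edges[a][b] for a, b in zip(path, path[1:]))
        self.add_edge(path[0], path[-1], freq)

g = ToyReasoner()

# Path Reuse: the memorized "Paris -> France" edge dominates, so the
# greedy step takes it even though the context only allows "Texas".
g.add_edge("Paris", "France", freq=100)
g.add_edge("Paris", "Texas", freq=2)
print(g.step("Paris", allowed={"Texas"}))  # -> "France" (hallucination)

# Path Compression: a frequently traversed 3-hop chain gets collapsed;
# repeated traversal keeps reinforcing the shortcut until it dominates.
for a, b in [("drug X", "enzyme Y"), ("enzyme Y", "pathway Z"),
             ("pathway Z", "effect W")]:
    g.add_edge(a, b, freq=50)
g.compress(["drug X", "enzyme Y", "pathway Z", "effect W"])
g.compress(["drug X", "enzyme Y", "pathway Z", "effect W"])
print(g.step("drug X"))  # -> "effect W": decoding skips the middle steps
```

Once the shortcut edge dominates, decoding can emit a fluent conclusion with no supporting intermediate chain, which is exactly the fluent but factually incorrect reasoning the summary describes.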