
Extrinsic Hallucinations in LLMs

Lil'Log (Lilian Weng)
🤖 AI Summary

This article defines and categorizes hallucination in large language models, focusing on extrinsic hallucination, where model outputs are not grounded in world knowledge. The author distinguishes in-context hallucination (output inconsistent with the provided context) from extrinsic hallucination (output not verifiable against external knowledge), and emphasizes that LLMs should be factual and acknowledge uncertainty rather than fabricate information.

Key Takeaways
  • Hallucination in LLMs refers specifically to unfaithful, fabricated content that lacks proper grounding in context or world knowledge.
  • In-context hallucination occurs when model output contradicts the provided source content.
  • Extrinsic hallucination happens when outputs cannot be verified against world knowledge or the pre-training dataset.
  • Preventing extrinsic hallucination requires LLMs to be factual and acknowledge when they don't know an answer.
  • The distinction helps narrow down hallucination from general model mistakes to specific cases of fabricated information.
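To make the in-context vs. extrinsic distinction concrete, here is a minimal, illustrative sketch (not from the article): classify_output applies the two definitions in order, and claim_is_supported is a toy token-overlap check standing in for a real entailment or fact-verification model. All function and variable names here are hypothetical.

```python
# Illustrative sketch of the hallucination taxonomy; not the article's method.
from enum import Enum
from typing import Optional


class HallucinationType(Enum):
    FAITHFUL = "faithful"      # consistent with context and world knowledge
    IN_CONTEXT = "in_context"  # unsupported by the provided source content
    EXTRINSIC = "extrinsic"    # unverifiable against external world knowledge


def claim_is_supported(claim: str, evidence: str) -> bool:
    """Toy verifier: a claim counts as supported if most of its tokens appear in the evidence.

    A real system would use an NLI / fact-verification model instead.
    """
    claim_tokens = set(claim.lower().split())
    evidence_tokens = set(evidence.lower().split())
    if not claim_tokens:
        return True
    return len(claim_tokens & evidence_tokens) / len(claim_tokens) >= 0.6


def classify_output(claim: str, context: Optional[str], world_knowledge: str) -> HallucinationType:
    """Apply the in-context vs. extrinsic definitions to a single model output."""
    # In-context hallucination: output is not grounded in the source content
    # the model was given (e.g. the document being summarized).
    if context is not None and not claim_is_supported(claim, context):
        return HallucinationType.IN_CONTEXT
    # Extrinsic hallucination: output cannot be verified against world knowledge
    # (standing in here for the pre-training dataset).
    if not claim_is_supported(claim, world_knowledge):
        return HallucinationType.EXTRINSIC
    return HallucinationType.FAITHFUL


if __name__ == "__main__":
    world = "The Eiffel Tower is in Paris and was completed in 1889"
    context = "The report says revenue grew five percent in 2023"
    print(classify_output("The Eiffel Tower is in Paris", None, world))                         # FAITHFUL
    print(classify_output("Gustave Eiffel designed the Statue of Liberty torch", None, world))  # EXTRINSIC
    print(classify_output("The CEO resigned last month", context, world))                       # IN_CONTEXT
```

The toy overlap check only serves to show where each definition applies; in practice both checks would use a trained verifier and a much richer knowledge source.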