#factual-accuracy (1 article)
AI · Neutral · Lil'Log (Lilian Weng) · Jul 7 · 5/10
🧠

Extrinsic Hallucinations in LLMs

This article defines and categorizes hallucination in large language models, focusing on extrinsic hallucination, where model outputs are not grounded in world knowledge. The author distinguishes in-context hallucination (output inconsistent with the provided context) from extrinsic hallucination (output not verifiable against external knowledge), and emphasizes that LLMs should be factual and acknowledge uncertainty rather than fabricate information.
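
As a minimal sketch (not from the article), the two categories can be illustrated as a toy classifier: a claim that contradicts the provided context is an in-context hallucination, while a claim that cannot be verified against external knowledge is an extrinsic hallucination. The `is_consistent_with` check and the `world_knowledge` set below are hypothetical stand-ins for a real entailment model and a retrieval/verification step.

```python
from enum import Enum, auto


class Hallucination(Enum):
    NONE = auto()
    IN_CONTEXT = auto()   # output contradicts the provided context
    EXTRINSIC = auto()    # output not verifiable against external knowledge


def is_consistent_with(claim: str, context: str) -> bool:
    # Placeholder: a real system would use an NLI/entailment model here.
    return claim in context


def classify_claim(claim: str, context: str, world_knowledge: set[str]) -> Hallucination:
    """Toy classifier following the two hallucination categories in the summary."""
    if not is_consistent_with(claim, context):
        return Hallucination.IN_CONTEXT
    if claim not in world_knowledge:
        return Hallucination.EXTRINSIC
    return Hallucination.NONE


# Example usage with made-up inputs:
context = "The report states revenue grew 5% in 2023."
knowledge = {"The report states revenue grew 5% in 2023."}
print(classify_claim("Revenue grew 50% in 2023.", context, knowledge))  # IN_CONTEXT or EXTRINSIC depending on checks
```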