arXiv · CS AI
Distributional Semantics Tracing: A Framework for Explaining Hallucinations in Large Language Models
Researchers introduce Distributional Semantics Tracing (DST), a new framework for explaining hallucinations in large language models by tracking how semantic representations drift across a network's layers. The method indicates that hallucinations arise when a model's internal representation is pulled toward concepts that are statistically correlated in the training data but inconsistent with the actual prompt context.
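The summary does not specify DST's internals, but the core idea of layer-wise semantic tracing can be sketched as measuring, at each layer, how similar a hidden state is to a prompt-grounded concept versus a spuriously correlated one. The sketch below is purely illustrative: the hidden states are simulated, and the function names (`trace_drift`, `cosine`) and the drift model are assumptions, not the paper's method.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def trace_drift(hidden_states, concept_context, concept_spurious):
    """Hypothetical tracer: for each layer's hidden state, record its
    similarity to the prompt-grounded concept and to a concept that is
    merely correlated with it in training data."""
    return [
        (cosine(h, concept_context), cosine(h, concept_spurious))
        for h in hidden_states
    ]

rng = np.random.default_rng(0)
dim, n_layers = 16, 6
context = rng.normal(size=dim)   # concept supported by the prompt
spurious = rng.normal(size=dim)  # training-correlated distractor concept

# Simulated hidden states that start near the context concept and drift
# toward the spurious one in deeper layers -- the kind of signature a
# tracing method like DST would flag as a hallucination precursor.
hidden = [
    (1 - t) * context + t * spurious + 0.05 * rng.normal(size=dim)
    for t in np.linspace(0.0, 1.0, n_layers)
]

for layer, (sim_ctx, sim_spur) in enumerate(trace_drift(hidden, context, spurious)):
    print(f"layer {layer}: context={sim_ctx:+.2f} spurious={sim_spur:+.2f}")
```

In this toy run the context similarity decays while the spurious similarity grows across layers, which is the drift pattern the blurb describes.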