AI Bullish · arXiv CS AI · 7h ago · 7/10
Learning Uncertainty from Sequential Internal Dispersion in Large Language Models
Researchers introduce Sequential Internal Variance Representation (SIVR), a novel supervised framework for detecting hallucinations in large language models by analyzing token-wise and layer-wise variance patterns in hidden states. The method demonstrates superior generalization compared to existing approaches while requiring smaller training datasets, potentially enabling practical deployment of hallucination detection systems.
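The summary does not spell out how SIVR builds its features, but the core idea it describes (token-wise and layer-wise variance of hidden states) can be sketched. Below is a minimal, illustrative example assuming hidden states collected as an array of shape `(num_layers, num_tokens, hidden_dim)`; the function name, axis choices, and pooling are assumptions for illustration, not the authors' exact construction.

```python
import numpy as np

def dispersion_features(hidden_states):
    """Illustrative variance features from per-layer hidden states.

    hidden_states: array of shape (num_layers, num_tokens, hidden_dim),
    e.g. gathered from a transformer forward pass.
    Returns a fixed-size feature vector combining token-wise variance
    (spread across tokens within each layer) and layer-wise variance
    (spread across layers for each token), averaged over the hidden dim.
    """
    h = np.asarray(hidden_states, dtype=np.float64)
    # Token-wise dispersion: variance over the token axis, one value per layer.
    token_var = h.var(axis=1).mean(axis=-1)   # shape: (num_layers,)
    # Layer-wise dispersion: variance over the layer axis, one value per token.
    layer_var = h.var(axis=0).mean(axis=-1)   # shape: (num_tokens,)
    # Pool the variable-length token axis so the feature size stays fixed.
    return np.concatenate([token_var, [layer_var.mean(), layer_var.max()]])

rng = np.random.default_rng(0)
h = rng.normal(size=(4, 12, 16))  # 4 layers, 12 tokens, hidden dim 16
feats = dispersion_features(h)
print(feats.shape)  # (6,)
```

In a supervised setup like the one described, feature vectors of this kind (one per generated answer) would then be fed to a lightweight classifier trained on hallucination labels; with Hugging Face transformers, the hidden states themselves can be obtained by calling the model with `output_hidden_states=True`.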