AINeutral · arXiv CS AI · 8h ago · 7/10
🧠
The Phenomenology of Hallucinations
Researchers discovered that AI language models hallucinate not because they fail to detect uncertainty, but because they fail to integrate uncertainty signals into output generation. The study shows that models can identify uncertain inputs internally, yet these signals, though geometrically amplified, remain functionally silent due to weak coupling with the output layers.
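To make the described mechanism concrete, here is a minimal, self-contained sketch (not the paper's code) of the kind of probing experiment the summary implies: a linear probe recovers an "uncertainty" direction from hidden states, while an output head that is nearly orthogonal to that direction leaves the output distribution almost unchanged. All data is synthetic, and every name (hidden_dim, W_out, the uncertainty direction u) is an illustrative assumption.

```python
# Sketch of "detectable internally, silent at the output": synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, hidden_dim, vocab = 2000, 64, 100

# Hidden states where one direction u encodes uncertainty (label y = 1).
u = rng.normal(size=hidden_dim)
u /= np.linalg.norm(u)
y = rng.integers(0, 2, size=n)                                # 1 = uncertain input
h = rng.normal(size=(n, hidden_dim)) + 3.0 * np.outer(y, u)   # amplified signal

# Output head with the u-component projected out: weak coupling by construction.
W_out = rng.normal(size=(hidden_dim, vocab))
W_out -= np.outer(u, u @ W_out)

# 1) A linear probe easily reads the uncertainty signal off the hidden states.
probe = LogisticRegression(max_iter=1000).fit(h[:1000], y[:1000])
print("probe accuracy:", probe.score(h[1000:], y[1000:]))

# 2) Output entropy barely differs between certain and uncertain inputs,
#    because the uncertainty direction is decoupled from the output layer.
logits = h @ W_out
p = np.exp(logits - logits.max(axis=1, keepdims=True))
p /= p.sum(axis=1, keepdims=True)
entropy = -(p * np.log(p + 1e-12)).sum(axis=1)
print("mean entropy (certain):  ", entropy[y == 0].mean())
print("mean entropy (uncertain):", entropy[y == 1].mean())
```

Run as written, the probe scores near 100% while the two entropy means are nearly identical, which is the dissociation the study describes: an internally legible uncertainty signal that the output layer never acts on.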