AIBearish · arXiv – CS AI · 10h ago · 7/10
🧠
The Geometry of Forgetting: Temporal Knowledge Drift as an Independent Axis in LLM Representations
Researchers demonstrate that large language models encode temporal knowledge drift—whether facts have become outdated since training—as a geometrically orthogonal direction in their internal representations, separate from correctness and uncertainty signals. This structural property explains why existing detection methods fail and why LLMs confidently produce outdated information, with implications for AI reliability and deployment.
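The core claim, that drift is encoded along a direction orthogonal to the correctness signal, can be illustrated with a toy probing experiment. The sketch below is entirely hypothetical (synthetic hidden states, not the paper's method or data): it plants two independent labels along orthogonal axes, fits a least-squares linear probe for each, and checks that the recovered probe directions are nearly orthogonal, which is why a correctness probe would be blind to drift.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 64, 4000  # hidden size and number of synthetic "hidden states"

# Hypothetical construction: two independent binary signals live along
# orthogonal axes of the representation (axis 0 = temporal drift,
# axis 1 = correctness), on top of isotropic noise.
drift = rng.choice([-1.0, 1.0], size=n)
correct = rng.choice([-1.0, 1.0], size=n)
H = rng.normal(size=(n, d))
H[:, 0] += 2.0 * drift
H[:, 1] += 2.0 * correct

def probe(H, y):
    """Fit a least-squares linear probe; return its unit-norm direction."""
    w, *_ = np.linalg.lstsq(H, y, rcond=None)
    return w / np.linalg.norm(w)

w_drift = probe(H, drift)
w_correct = probe(H, correct)

# Each probe decodes its own signal well...
acc_drift = float(np.mean(np.sign(H @ w_drift) == drift))
acc_correct = float(np.mean(np.sign(H @ w_correct) == correct))

# ...yet the two directions are nearly orthogonal (cosine ~ 0), so the
# correctness probe carries almost no information about drift.
cosine = float(w_drift @ w_correct)
print(f"drift acc={acc_drift:.3f}  correct acc={acc_correct:.3f}  cos={cosine:.3f}")
```

If the geometry described in the paper holds, a detector trained only on correctness or uncertainty signals would project out the drift axis entirely, matching the reported failure of existing methods.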