arXiv – CS AI · 9h ago
A Geometric Taxonomy of Hallucinations in LLMs
Researchers propose a geometric framework for detecting hallucinations in large language models by analyzing the structure of the embedding space, distinguishing three error types with different detectability profiles. The approach outperforms standard NLI baselines on expert-annotated datasets and yields interpretable diagnostics for production systems operating under black-box constraints.
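The summary does not specify the paper's actual detector, but a common geometric signal in this line of work is the dispersion of embeddings across multiple sampled answers: tightly clustered answers suggest consistency, wide scatter suggests confabulation. The sketch below is purely illustrative, with hypothetical function names, random stand-in embeddings, and an assumed dispersion metric (mean distance from the centroid), not the authors' method.

```python
# Hypothetical sketch of an embedding-dispersion hallucination signal.
# All names, thresholds, and the metric itself are assumptions, not the
# paper's actual framework.
import numpy as np

def dispersion_score(embeddings: np.ndarray) -> float:
    """Mean distance of sampled-answer embeddings from their centroid.

    High dispersion across resampled answers to the same prompt is one
    heuristic indicator of a possible hallucination.
    """
    centered = embeddings - embeddings.mean(axis=0)
    return float(np.linalg.norm(centered, axis=1).mean())

# Stand-in data: 8 sampled answers, 16-dim embeddings each.
rng = np.random.default_rng(0)
consistent = rng.normal(0.0, 0.05, size=(8, 16))   # tightly clustered answers
scattered = rng.normal(0.0, 1.0, size=(8, 16))     # widely dispersed answers

assert dispersion_score(consistent) < dispersion_score(scattered)
```

A real detector in this family would embed actual sampled model outputs and compare the score against a calibrated threshold; the black-box constraint mentioned in the summary fits such approaches, since they need only output text, not model internals.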