AI · Neutral · arXiv, CS AI · 7h ago · 6/10
LLM Reasoning Is Latent, Not the Chain of Thought
A new position paper challenges the prevailing assumption that large language models reason through their explicit chain-of-thought outputs, arguing instead that reasoning occurs primarily in latent-state trajectories hidden within the model's computations. The paper disentangles three confounded factors and argues that, given this distinction, current reasoning benchmarks and interpretability claims require fundamental reevaluation.