AI · Neutral · arXiv – CS AI · 10h ago · 6/10
🧠
Belief or Circuitry? Causal Evidence for In-Context Graph Learning
Researchers present causal evidence that large language models learn in-context through a dual mechanism, combining genuine structure inference with local pattern matching, rather than relying on either strategy alone. Using graph random-walk tasks and activation patching, they show that LLMs simultaneously encode multiple competing graph topologies in orthogonal representational subspaces, and that late-layer circuits causally drive the models' graph-preference predictions. A minimal sketch of the activation-patching technique follows below.
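The sketch below illustrates the general activation-patching recipe the summary refers to, not the paper's actual experiments: cache a late-layer activation from a "clean" run, splice it into a run on a "corrupted" prompt, and check whether the prediction shifts back toward the clean answer. The toy model, prompts, and layer choice are all illustrative assumptions.

```python
# A minimal activation-patching sketch (assumed setup, not the paper's code).
import torch
import torch.nn as nn

torch.manual_seed(0)
VOCAB, DIM, LAYERS = 32, 64, 4

class ToyLM(nn.Module):
    """Tiny stand-in transformer; real experiments would use a pretrained LLM."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.blocks = nn.ModuleList(
            [nn.TransformerEncoderLayer(DIM, nhead=4, batch_first=True)
             for _ in range(LAYERS)])
        self.unembed = nn.Linear(DIM, VOCAB)

    def forward(self, ids):
        h = self.embed(ids)
        for blk in self.blocks:
            h = blk(h)
        return self.unembed(h)

model = ToyLM().eval()

# Hypothetical in-context random-walk prompts: token ids stand in for graph
# nodes, and the two prompts imply different (competing) edge structures.
clean_ids     = torch.tensor([[1, 2, 3, 2, 1, 2]])
corrupted_ids = torch.tensor([[1, 4, 3, 4, 1, 4]])

# 1. Cache the residual stream at a late block during the clean run.
cache = {}
def save_hook(module, inputs, output):
    cache["h"] = output.detach()

LATE = LAYERS - 1
handle = model.blocks[LATE].register_forward_hook(save_hook)
with torch.no_grad():
    clean_logits = model(clean_ids)
handle.remove()

# 2. Re-run on the corrupted prompt, splicing in the cached clean activation
# (returning a tensor from a forward hook replaces the block's output).
def patch_hook(module, inputs, output):
    return cache["h"]

handle = model.blocks[LATE].register_forward_hook(patch_hook)
with torch.no_grad():
    patched_logits = model(corrupted_ids)
handle.remove()

with torch.no_grad():
    corrupted_logits = model(corrupted_ids)

# 3. If the late layer causally carries the graph preference, patching should
# pull the corrupted prediction back toward the clean continuation.
target = clean_ids[0, -1]
print("clean     :", clean_logits[0, -1, target].item())
print("corrupted :", corrupted_logits[0, -1, target].item())
print("patched   :", patched_logits[0, -1, target].item())
```

With an untrained toy model the three logits are essentially noise; the point is the mechanics: a causal claim about a layer follows from the patched run recovering the clean behavior, not from correlational probing.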