arXiv · CS AI · 6h ago
Transformers Remember First, Forget Last: Dual-Process Interference in LLMs
A study of 39 large language models finds that they exhibit proactive interference: earlier information disrupts recall of more recent information. This is the opposite of humans, who more often show retroactive interference, where new information disrupts recall of older material. The pattern held across every model tested; larger models resisted retroactive interference better, but their proactive interference remained unchanged.