AIBearish · arXiv – CS AI · 10h ago · 7/10
In-Context Fixation: When Demonstrated Labels Override Semantics in Few-Shot Classification
Researchers demonstrate that large language models suffer from 'in-context fixation,' where homogeneous demonstration labels—even semantically valid ones—cause classification accuracy to collapse below 12%. The models treat label-slot tokens as an exhaustive vocabulary set rather than learning from semantic meaning, revealing that in-context learning operates as constrained vocabulary retrieval rather than genuine concept learning.
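The "constrained vocabulary retrieval" behavior described above can be illustrated with a toy simulation (a hypothetical sketch, not the paper's code): a fixated predictor draws its answer only from labels seen in the demonstration slots, so homogeneous demonstrations force every prediction to the single demonstrated label, regardless of the query's meaning.

```python
import random

def build_prompt(demos, query):
    """Format few-shot demonstrations plus a query into one prompt string."""
    lines = [f"Text: {t}\nLabel: {y}" for t, y in demos]
    lines.append(f"Text: {query}\nLabel:")
    return "\n\n".join(lines)

def fixated_predict(demos, query):
    """Toy model of 'in-context fixation': the prediction is sampled only
    from labels appearing in the demonstration slots, ignoring the query's
    semantics entirely (hypothetical simulation, not a real LLM)."""
    label_vocab = [y for _, y in demos]
    return random.choice(label_vocab)

# Homogeneous demonstrations: every label slot says "positive".
demos = [
    ("I loved this movie.", "positive"),
    ("Fantastic acting and score.", "positive"),
    ("A delightful surprise.", "positive"),
]

test_set = [
    ("Terrible plot, awful pacing.", "negative"),
    ("I hated every minute.", "negative"),
    ("A wonderful film.", "positive"),
]

preds = [fixated_predict(demos, q) for q, _ in test_set]
acc = sum(p == y for p, (_, y) in zip(preds, test_set)) / len(test_set)
```

Because the demonstrated label set contains only "positive", every prediction is "positive" and accuracy is capped by the base rate of that class in the test set, mirroring the collapse the paper reports.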
🧠 Llama