#context-pollution · 1 article
arXiv – CS AI · 4h ago
🧠

Do LLMs Benefit From Their Own Words?

Research finds that large language models gain little from conditioning on their own previous responses in multi-turn conversations. Omitting the assistant's turns from the prompt can shrink context lengths by up to 10x while maintaining response quality, and in some cases it even improves performance by avoiding "context pollution," where a model over-conditions on its earlier outputs.
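To make the idea concrete, here is a minimal sketch of what "omitting assistant history" could look like when assembling a multi-turn chat prompt. This is an illustration under assumed conventions, not the paper's implementation: the role-tagged message format and the `strip_assistant_history` helper are assumptions.

```python
from typing import Dict, List

Message = Dict[str, str]  # e.g. {"role": "user" | "assistant" | "system", "content": "..."}


def strip_assistant_history(messages: List[Message], keep_last: int = 0) -> List[Message]:
    """Drop prior assistant turns from a chat history before the next model call.

    System and user turns are kept intact; optionally, the `keep_last` most
    recent assistant turns are retained if some continuity is desired.
    """
    assistant_idx = [i for i, m in enumerate(messages) if m["role"] == "assistant"]
    # Indices of assistant turns to keep (the most recent `keep_last`, if any).
    keep = set(assistant_idx[len(assistant_idx) - keep_last:]) if keep_last > 0 else set()
    return [m for i, m in enumerate(messages) if m["role"] != "assistant" or i in keep]


# Example: a conversation with one prior assistant reply shrinks to just the
# system prompt and the user turns, cutting the context sent to the model.
history = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize the attached report."},
    {"role": "assistant", "content": "...long generated summary..."},
    {"role": "user", "content": "Now list the action items."},
]
print(strip_assistant_history(history))
```

In this sketch the savings come from never re-sending the model's own (often long) generations, which is where most of the claimed context-length reduction would accrue.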