AINeutral — arXiv · CS AI · 7h ago
Why Fine-Tuning Encourages Hallucinations and How to Fix It
Researchers find that supervised fine-tuning of large language models increases hallucinations by degrading pre-existing knowledge through semantic interference. The study proposes self-distillation and parameter-freezing techniques that mitigate this degradation while preserving task performance.
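The summary names self-distillation as one mitigation. A minimal sketch of the general idea, assuming a standard formulation (the paper's exact loss may differ): the fine-tuning objective is augmented with a KL penalty that keeps the student's output distribution close to that of a frozen copy of the pre-trained model, so task learning cannot freely overwrite prior knowledge. All function names here are illustrative, not from the paper.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kl_divergence(p, q):
    # KL(p || q): how far the student distribution q has drifted
    # from the frozen teacher distribution p.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def cross_entropy(probs, label):
    # Standard task loss on the gold label.
    return -math.log(probs[label])

def distillation_loss(student_logits, teacher_logits, label, lam=0.5):
    # Combined objective: task loss plus a penalty (weighted by lam)
    # for drifting away from the frozen pre-trained teacher.
    p_student = softmax(student_logits)
    p_teacher = softmax(teacher_logits)
    return cross_entropy(p_student, label) + lam * kl_divergence(p_teacher, p_student)
```

With `lam = 0` this reduces to ordinary fine-tuning; increasing `lam` pulls the student back toward the pre-trained model's predictions, which is the mechanism claimed to limit knowledge degradation. When student and teacher logits coincide, the KL term vanishes and only the task loss remains.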