AIBullish · arXiv CS AI · 6h ago
DUET: Distilled LLM Unlearning from an Efficiently Contextualized Teacher
Researchers propose DUET, a distillation-based method for LLM unlearning that removes undesirable knowledge from a model without full retraining. The technique pairs computational efficiency with security advantages, reportedly outperforming existing methods in both knowledge removal and utility preservation while being significantly more data-efficient.
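The blurb does not spell out DUET's objective, but the general distillation-based unlearning recipe it alludes to can be sketched: on data to retain, the student matches the teacher's output distribution; on data to forget, the student is pushed toward an uninformative target. The sketch below is a generic illustration under those assumptions, not DUET's actual loss; the function names, the `alpha` weighting, and the uniform forget-target are all invented here for illustration.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl(p, q, eps=1e-12):
    # KL(p || q) per example, summed over the vocabulary axis.
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)

def unlearning_distillation_loss(student_logits_retain, teacher_logits_retain,
                                 student_logits_forget, vocab_size, alpha=1.0):
    # Retain set: standard distillation, student mimics the teacher.
    retain_loss = kl(softmax(teacher_logits_retain),
                     softmax(student_logits_retain)).mean()
    # Forget set: drive the student toward a uniform distribution, a common
    # stand-in for "the model no longer knows this" (an assumption here,
    # not necessarily DUET's choice of forget target).
    uniform = np.full(student_logits_forget.shape, 1.0 / vocab_size)
    forget_loss = kl(uniform, softmax(student_logits_forget)).mean()
    return retain_loss + alpha * forget_loss
```

The loss vanishes when the student already matches the teacher on retain data and is uniform on forget data, and grows as the student either drifts from the teacher or retains a peaked (confident) distribution over forgotten content.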