arXiv – CS AI · 4h ago

DUET: Distilled LLM Unlearning from an Efficiently Contextualized Teacher

Researchers propose DUET, a new distillation-based method for LLM unlearning that removes undesirable knowledge from AI models without full retraining. The technique combines computational efficiency with security advantages, achieving better performance in both knowledge removal and utility preservation while being significantly more data-efficient than existing methods.
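The summary does not spell out DUET's objective, but distillation-based unlearning methods generally train a student to match a teacher's output distribution on data to retain while pushing it away from the teacher on data to forget. The sketch below illustrates that generic pattern with a toy loss; the function name, the uniform-distribution forget target, and all inputs are illustrative assumptions, not DUET's actual formulation.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl(p, q, eps=1e-12):
    # KL(p || q), summed over the vocabulary axis
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)

def unlearning_distillation_loss(student_logits, teacher_logits, forget_mask):
    """Generic distillation-style unlearning objective (illustrative, not DUET's).

    Retain examples: the student matches the teacher's distribution,
    preserving utility. Forget examples: the student is pulled toward a
    uniform distribution, erasing the teacher's signal for that knowledge.
    """
    p_student = softmax(student_logits)
    p_teacher = softmax(teacher_logits)
    uniform = np.full_like(p_teacher, 1.0 / p_teacher.shape[-1])
    # Per-example target: uniform where forget_mask is True, teacher otherwise
    target = np.where(forget_mask[:, None], uniform, p_teacher)
    return kl(target, p_student).mean()
```

Because only a small forget set and retain set are needed (rather than the full training corpus), objectives of this shape are typically far more data-efficient than retraining from scratch, which matches the efficiency claim in the summary.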