y0news
#distillation
2 articles
AIBullish · arXiv – CS AI · 6h ago · 11
🧠

DUET: Distilled LLM Unlearning from an Efficiently Contextualized Teacher

Researchers propose DUET, a distillation-based method for LLM unlearning that removes undesirable knowledge from a model without full retraining. By distilling from an efficiently contextualized teacher, the technique pairs computational efficiency with security advantages, outperforming existing methods on both knowledge removal and utility preservation while being significantly more data-efficient.
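The summary doesn't give DUET's exact objective, so the following is only a toy illustration of the general distillation-unlearning idea: on retained data the student matches the teacher's output distribution, while on forget data it is pulled toward an uninformative (uniform) target. All function names, and the choice of a uniform forget target, are assumptions for illustration, not DUET's actual loss.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kl_div(p, q):
    # KL(p || q) for two discrete distributions.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def unlearning_loss(student_logits, teacher_logits, is_forget, alpha=1.0):
    """Illustrative distillation-unlearning objective (not DUET's exact loss):
    on retained data, match the teacher; on forget data, pull the student
    toward a uniform distribution so the targeted knowledge is erased."""
    s = softmax(student_logits)
    if is_forget:
        uniform = [1.0 / len(s)] * len(s)
        return alpha * kl_div(uniform, s)
    t = softmax(teacher_logits)
    return kl_div(t, s)
```

Because the forget-set term needs only the student's outputs and a fixed target, a scheme like this avoids retraining from scratch, which is the efficiency argument the abstract makes.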

AIBullish · arXiv – CS AI · 6h ago · 6
🧠

Embodiment-Aware Generalist Specialist Distillation for Unified Humanoid Whole-Body Control

Researchers introduce EAGLE, a reinforcement learning framework that produces a unified control policy for multiple humanoid robots without per-robot tuning. The system uses iterative generalist-specialist distillation to let a single AI controller manage diverse humanoid embodiments and support complex behaviors beyond basic walking.
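The summary doesn't detail EAGLE's training pipeline, so here is only a toy sketch of the generalist-specialist distillation pattern it names: per-embodiment specialist policies are distilled into one embodiment-conditioned generalist by regressing onto the specialists' actions. The specialist policies, the linear generalist, and all names here are assumptions for illustration; EAGLE's specialists are RL-trained and the distillation is iterated.

```python
import random

# Hypothetical specialist policies, one per embodiment id:
# each maps a scalar observation to a scalar action.
specialists = {
    0: lambda obs: 2.0 * obs,   # toy policy for robot embodiment 0
    1: lambda obs: -1.0 * obs,  # toy policy for robot embodiment 1
}

def distill(specialists, steps=2000, lr=0.05):
    """Fit one embodiment-conditioned generalist (action = w[e] * obs)
    by SGD on the squared error to each specialist's action."""
    w = {e: 0.0 for e in specialists}
    rng = random.Random(0)
    for _ in range(steps):
        e = rng.choice(list(specialists))      # sample an embodiment
        obs = rng.uniform(-1.0, 1.0)           # sample an observation
        target = specialists[e](obs)           # specialist's action
        pred = w[e] * obs                      # generalist's action
        w[e] -= lr * 2.0 * (pred - target) * obs  # squared-error gradient
    return w
```

After distillation the single model `w` reproduces each specialist's behavior, which is the sense in which one controller can serve several embodiments.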