AIBullish · arXiv CS AI · 5h ago
🧠
Is Retraining-Free Enough? The Necessity of Router Calibration for Efficient MoE Compression
Researchers propose Router Knowledge Distillation (Router KD) to improve retraining-free compression of Mixture-of-Experts (MoE) models: the router is calibrated while expert parameters stay frozen. The method targets the router-expert mismatch that degrades performance in compressed MoE models, with particularly strong results on fine-grained MoE architectures. A sketch of the idea follows below.
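The paper itself isn't excerpted here, but the core idea, calibrating a compressed model's router against the original router while leaving experts untouched, maps onto a short distillation loop. The sketch below is a hypothetical PyTorch illustration, not the authors' implementation: `Router`, `router_kd_loss`, the warm start, and the choice of KL divergence over softened routing distributions are all assumptions. Only gating weights are updated, standing in for the "expert parameters unchanged" constraint; the compressed experts themselves are not shown.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sketch of router calibration via knowledge distillation.
# All names and design choices here are illustrative, not from the paper.

class Router(nn.Module):
    """Linear gating network that scores each expert for a token."""
    def __init__(self, d_model: int, n_experts: int):
        super().__init__()
        self.gate = nn.Linear(d_model, n_experts, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.gate(x)  # [n_tokens, n_experts] routing logits

def router_kd_loss(student_logits, teacher_logits, kept_experts, T=1.0):
    """KL between teacher and student routing distributions.

    The teacher's logits are restricted to the experts that survive
    compression, so the student router learns to redistribute the
    teacher's routing mass over the remaining experts.
    """
    t = F.softmax(teacher_logits[:, kept_experts] / T, dim=-1)
    s = F.log_softmax(student_logits / T, dim=-1)
    return F.kl_div(s, t, reduction="batchmean") * T * T

# Toy calibration loop: only the student router trains; the compressed
# experts (not shown) would remain frozen throughout.
torch.manual_seed(0)
d_model, n_teacher_experts = 64, 8
kept = torch.tensor([0, 2, 3, 5, 6, 7])   # experts surviving pruning
teacher = Router(d_model, n_teacher_experts)
student = Router(d_model, len(kept))
# Warm-start the student gate from the rows of the kept experts.
student.gate.weight.data = teacher.gate.weight.data[kept].clone()

opt = torch.optim.AdamW(student.parameters(), lr=1e-3)
for step in range(200):
    x = torch.randn(256, d_model)          # stand-in calibration activations
    with torch.no_grad():
        t_logits = teacher(x)              # teacher router is never updated
    loss = router_kd_loss(student(x), t_logits, kept)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because only the small gating matrix is optimized, a loop like this needs just a modest calibration set of activations rather than full retraining, which is presumably what makes the approach attractive for retraining-free compression.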