9 articles tagged with #self-distillation. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.
AI · Bullish · arXiv – CS AI · 4d ago · 7/10
🧠 SkillFactory is a novel fine-tuning method that enables language models to learn cognitive behaviors like verification and backtracking without requiring distillation from stronger models. The approach uses self-rearranged training samples during supervised fine-tuning to prime models for subsequent reinforcement learning, resulting in better generalization and robustness.
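The summary doesn't spell out how the "self-rearranged" samples are built, so here is a minimal, hedged sketch of one plausible construction: splice a model's own failed attempt in front of its correct solution so the training trace demonstrates backtracking and verification without any stronger teacher. The function name and marker phrases are illustrative assumptions, not the paper's.

```python
# Hypothetical sketch of "self-rearranged" SFT samples in the spirit of
# SkillFactory: splice a model's own failed attempt in front of its correct
# solution so the trace exhibits backtracking and verification behaviors.
# Marker phrases and names are assumptions for illustration.

BACKTRACK_MARKER = "Wait, that doesn't check out. Let me try again."
VERIFY_MARKER = "Let me verify this answer."

def make_backtracking_sample(question: str, failed: str, correct: str) -> dict:
    """Rearrange two self-generated attempts into one backtracking trace."""
    trace = (
        f"{failed}\n{BACKTRACK_MARKER}\n"
        f"{correct}\n{VERIFY_MARKER}\n"
        "The steps are consistent, so this answer stands."
    )
    return {"prompt": question, "completion": trace}

# Both attempts come from the *same* model, so no stronger teacher is
# needed; SFT on such traces primes the model for subsequent RL.
sample = make_backtracking_sample(
    question="What is 17 * 24?",
    failed="17 * 24 = 398.",  # a sampled incorrect attempt
    correct="17 * 24 = 17 * 20 + 17 * 4 = 340 + 68 = 408.",
)
print(sample["completion"])
```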
AI · Bullish · arXiv – CS AI · Mar 27 · 7/10
🧠 Researchers developed GoldiCLIP, a data-efficient vision-language model that achieves state-of-the-art performance using only 30 million images, 300x less data than leading methods. The framework significantly improves image-text retrieval through three key innovations: text-conditioned self-distillation, VQA-integrated encoding, and uncertainty-based loss weighting.
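As a hedged illustration of one of those three components, uncertainty-based loss weighting, here is a common formulation where each loss term is scaled by a learned precision and regularized by its log-variance. Pairing it with a contrastive loss and a self-distillation loss is an assumption of this sketch, not necessarily GoldiCLIP's exact recipe.

```python
import torch
import torch.nn as nn

# Illustrative uncertainty-based loss weighting: each term is scaled by a
# learned exp(-log_var) and regularized by log_var, so noisier objectives
# are automatically down-weighted. GoldiCLIP's actual form may differ.

class UncertaintyWeightedLoss(nn.Module):
    def __init__(self, num_terms: int = 2):
        super().__init__()
        # one learnable log-variance per loss term
        self.log_var = nn.Parameter(torch.zeros(num_terms))

    def forward(self, losses: list) -> torch.Tensor:
        total = losses[0].new_zeros(())
        for i, loss in enumerate(losses):
            precision = torch.exp(-self.log_var[i])
            total = total + precision * loss + self.log_var[i]
        return total

# e.g. balancing an image-text contrastive loss against a distillation loss
weighter = UncertaintyWeightedLoss(num_terms=2)
contrastive_loss = torch.tensor(1.3)
distill_loss = torch.tensor(0.4)
print(weighter([contrastive_loss, distill_loss]))
```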
AI · Bullish · arXiv – CS AI · Mar 26 · 7/10
🧠 Researchers propose MTP-D, a self-distillation method that improves Multi-Token Prediction for Large Language Models, achieving 7.5% better acceptance rates and up to 220% inference speedup. The technique addresses key challenges in training multiple prediction heads while preserving main model performance.
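A toy sketch of the general idea, with MTP-D's specifics assumed rather than known: extra head k at position t is trained to match the main head's distribution at position t+k, with the teacher side detached so the main model's behavior is preserved.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy multi-token prediction with self-distilled draft heads. The trunk,
# sizes, and loss pairing are placeholder assumptions for illustration.

class ToyMTP(nn.Module):
    def __init__(self, vocab=100, dim=32, n_extra_heads=2):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.trunk = nn.GRU(dim, dim, batch_first=True)
        self.main_head = nn.Linear(dim, vocab)
        self.extra_heads = nn.ModuleList(
            nn.Linear(dim, vocab) for _ in range(n_extra_heads)
        )

    def forward(self, tokens):
        h, _ = self.trunk(self.embed(tokens))  # (batch, T, dim)
        return h

def mtp_self_distill_loss(model, tokens):
    h = model(tokens)
    # frozen teacher targets: the main head's own next-token distributions
    teacher = F.softmax(model.main_head(h).detach(), dim=-1)
    loss = torch.zeros((), device=tokens.device)
    for k, head in enumerate(model.extra_heads, start=1):
        T = tokens.size(1) - k
        # head k at position t predicts the main head's output at t+k
        student_logp = F.log_softmax(head(h[:, :T]), dim=-1)
        loss = loss + F.kl_div(
            student_logp, teacher[:, k:k + T], reduction="batchmean"
        )
    return loss / len(model.extra_heads)

model = ToyMTP()
tokens = torch.randint(0, 100, (4, 16))
print(mtp_self_distill_loss(model, tokens))
```

At inference, such draft heads propose several tokens per step that the main head verifies; the reported acceptance-rate gains refer to how often those proposals are kept.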
AI · Bullish · arXiv – CS AI · Mar 16 · 7/10
🧠 Researchers developed a new method for training AI language models on multi-turn user conversations through self-distillation, leveraging follow-up messages to improve model alignment. Testing on real-world WildChat conversations showed improvements on alignment and instruction-following benchmarks while enabling personalization without explicit feedback.
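The summary only says follow-up messages are leveraged; one speculative reading, sketched below, is hindsight revision: condition the model on the user's follow-up to produce a better first response, then train on (original prompt, improved response) so the improvement applies before any follow-up arrives. The prompts and the `generate` callable are placeholders, not the paper's procedure.

```python
# Speculative sketch of self-distillation from follow-up turns. `generate`
# stands in for any chat-model call; the revision prompt is an assumption.

def build_self_distill_pair(generate, user_msg, first_reply, follow_up):
    revision_prompt = (
        f"User: {user_msg}\n"
        f"Assistant: {first_reply}\n"
        f"User (follow-up): {follow_up}\n"
        "Rewrite the assistant's first reply so the follow-up would have "
        "been unnecessary. Reply with the improved answer only."
    )
    improved = generate(revision_prompt)
    # SFT pair: original prompt paired with the hindsight-improved completion
    return {"prompt": user_msg, "completion": improved}

# Usage with a stand-in generation function:
fake_generate = lambda p: "The Moon is about 384,000 km from Earth on average."
pair = build_self_distill_pair(
    fake_generate,
    user_msg="How far is the Moon?",
    first_reply="About 384,000.",
    follow_up="384,000 what? Please include units.",
)
print(pair)
```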
AI · Bullish · arXiv – CS AI · 3d ago · 6/10
🧠 Researchers introduce Skill-SD, a novel training framework for multi-turn LLM agents that improves sample efficiency by converting successful agent trajectories into dynamic natural-language skills that condition a teacher model. The approach combines reinforcement learning with self-distillation and achieves significant performance improvements over baseline methods on benchmark tasks.
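A minimal sketch of the skill-conditioned-teacher idea as described, with all names hypothetical: successful trajectories are summarized into short natural-language skills, the "teacher" is the same model prompted with those skills, and the bare student is trained to match it so the skills get internalized.

```python
# Hypothetical skill library and teacher-prompt assembly; Skill-SD's actual
# data structures and prompt formats are not given in the summary.

skill_library: dict = {}

def add_skill(task_type: str, trajectory_summary: str) -> None:
    """Store a one-line lesson distilled from a successful trajectory."""
    skill_library.setdefault(task_type, []).append(trajectory_summary)

def teacher_prompt(task_type: str, observation: str) -> str:
    """Condition the teacher on accumulated skills for this task type."""
    skills = "\n".join(f"- {s}" for s in skill_library.get(task_type, []))
    return f"Known skills:\n{skills}\n\nTask: {observation}\nAction:"

add_skill("web_search", "Quote rare entity names exactly instead of paraphrasing.")
print(teacher_prompt("web_search", "Find the 1987 paper introducing TD(lambda)."))
# Self-distillation step (not shown): minimize the divergence between the
# model's action distribution on the bare prompt and on this skill-
# conditioned prompt, alongside the RL objective.
```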
AI · Neutral · arXiv – CS AI · Mar 26 · 6/10
🧠 Researchers introduce SPARE, a new machine unlearning method for text-to-image diffusion models that efficiently removes unwanted concepts while preserving model performance. The two-stage approach uses parameter localization and self-distillation to achieve selective concept erasure with minimal computational overhead.
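A hedged sketch of how the two stages could fit together, with the selection rule assumed (gradient-magnitude ranking is a common choice, not necessarily SPARE's): localize the parameters most sensitive to the concept, then let a self-distillation loss update only those.

```python
import torch

# Illustrative two-stage setup: (1) rank parameters by gradient magnitude on
# concept prompts; (2) mask gradients so only localized parameters move under
# the self-distillation loss (matching the model's own prediction for a
# neutral anchor prompt). SPARE's exact criteria are assumptions here.

def localize_params(model, concept_loss, top_frac=0.01):
    """Stage 1: select the top fraction of parameters by |gradient|."""
    model.zero_grad()
    concept_loss.backward()
    scores = torch.cat([p.grad.abs().flatten() for p in model.parameters()])
    k = max(1, int(top_frac * scores.numel()))
    threshold = scores.topk(k).values.min()
    return [(p.grad.abs() >= threshold).float() for p in model.parameters()]

def apply_masks(model, masks):
    """Stage 2: zero gradients of non-localized parameters each step."""
    for p, m in zip(model.parameters(), masks):
        if p.grad is not None:
            p.grad.mul_(m)

# Toy usage with a stand-in module and a stand-in "concept" loss:
model = torch.nn.Linear(4, 4)
loss = model(torch.randn(2, 4)).pow(2).sum()
masks = localize_params(model, loss)
print(sum(int(m.sum()) for m in masks), "parameters selected")
```

Restricting updates to a tiny localized subset is what keeps the erasure selective and the computational overhead minimal.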
AI · Bullish · arXiv – CS AI · Mar 17 · 6/10
🧠 Researchers introduce Truncated-Reasoning Self-Distillation (TRSD), a post-training method that enables AI language models to maintain accuracy while using shorter reasoning traces. The technique reduces computational costs by training models to produce correct answers from partial reasoning, achieving significant inference-time efficiency gains without sacrificing performance.
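Here is a minimal sketch of the data construction the summary describes: keep a prefix of the model's own (verified) reasoning trace and train it to jump straight to the final answer. Truncating at step boundaries and the `Answer:` format are assumptions of this sketch.

```python
# Illustrative truncated-reasoning sample construction: the completion is
# just the verified final answer, so the model learns to commit to it from
# a partial trace. TRSD's exact truncation scheme is an assumption here.

def truncate_trace(question: str, reasoning_steps: list, answer: str,
                   keep_frac: float) -> dict:
    keep = int(len(reasoning_steps) * keep_frac)
    partial = "\n".join(reasoning_steps[:keep])
    return {
        "prompt": f"{question}\n{partial}",
        "completion": f"\nAnswer: {answer}",
    }

steps = [
    "17 * 24 = 17 * 20 + 17 * 4.",
    "17 * 20 = 340.",
    "17 * 4 = 68.",
    "340 + 68 = 408.",
]
# Training across several truncation ratios teaches the model to reach the
# correct answer from progressively shorter traces.
for frac in (0.75, 0.5, 0.25):
    print(truncate_trace("What is 17 * 24?", steps, "408", frac)["prompt"], "\n---")
```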
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10
🧠 Researchers propose Attention Smoothing Unlearning (ASU), a new framework that helps Large Language Models forget sensitive or copyrighted content without losing overall performance. The method uses self-distillation and attention smoothing to erase specific knowledge while maintaining coherent responses, outperforming existing unlearning techniques.
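A hedged sketch of the two ingredients the summary names, with their exact forms assumed: blend attention weights toward uniform on forget-set inputs so no token attends sharply to the sensitive span, and use a KL self-distillation loss toward the frozen original model on retain data to keep general behavior intact.

```python
import torch
import torch.nn.functional as F

# Assumed forms of ASU's two components; the paper's actual losses may differ.

def smooth_attention(attn: torch.Tensor, alpha: float = 0.5) -> torch.Tensor:
    """attn: (batch, heads, query, key) attention probabilities.
    Interpolate toward the uniform distribution over keys."""
    uniform = torch.full_like(attn, 1.0 / attn.size(-1))
    return (1 - alpha) * attn + alpha * uniform  # rows still sum to 1

def retain_distill_loss(student_logits, frozen_teacher_logits):
    """Self-distillation on retain data: stay close to the original model."""
    return F.kl_div(
        F.log_softmax(student_logits, dim=-1),
        F.softmax(frozen_teacher_logits.detach(), dim=-1),
        reduction="batchmean",
    )

attn = torch.softmax(torch.randn(1, 2, 4, 4), dim=-1)
print(smooth_attention(attn).sum(-1))  # each row of weights still sums to 1
print(retain_distill_loss(torch.randn(2, 10), torch.randn(2, 10)))
```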
AI · Neutral · arXiv – CS AI · Mar 3 · 5/10
🧠 Researchers developed UTICA, a new foundation model for time series classification that uses non-contrastive self-distillation methods adapted from computer vision. The model achieves state-of-the-art performance on UCR and UEA benchmarks by learning temporal patterns through a student-teacher framework with data augmentation and patch masking.
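A compact sketch of that student-teacher setup on a 1-D series, in the style of BYOL/data2vec-type non-contrastive self-distillation from vision: the student sees a masked view, the EMA teacher sees the clean view, and their outputs are matched. Architecture sizes, the masking scheme, and the momentum value are placeholder assumptions, not UTICA's.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Non-contrastive self-distillation on time series, sketched from the
# summary (EMA teacher, patch masking); all specifics are assumptions.

class PatchEncoder(nn.Module):
    def __init__(self, patch_len=16, dim=64):
        super().__init__()
        self.patch_len = patch_len
        self.proj = nn.Linear(patch_len, dim)
        self.head = nn.Linear(dim, dim)

    def forward(self, x):  # x: (batch, length)
        # split the series into non-overlapping patches, embed, and pool
        patches = x.unfold(1, self.patch_len, self.patch_len)  # (B, N, P)
        return self.head(self.proj(patches).mean(dim=1))       # (B, dim)

student, teacher = PatchEncoder(), PatchEncoder()
teacher.load_state_dict(student.state_dict())
for p in teacher.parameters():
    p.requires_grad_(False)  # teacher is only updated via EMA

x = torch.randn(8, 128)
masked = x.clone()
masked[:, 32:64] = 0.0  # mask a contiguous block of patches

# Student sees the masked view, teacher sees the clean view; match outputs
# (no negative pairs, hence "non-contrastive").
loss = F.mse_loss(student(masked), teacher(x))
loss.backward()

# EMA teacher update; 0.996 is an arbitrary momentum for this sketch
with torch.no_grad():
    for ps, pt in zip(student.parameters(), teacher.parameters()):
        pt.mul_(0.996).add_(ps, alpha=0.004)
print(float(loss))
```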