22 articles tagged with #knowledge-distillation. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.
AI · Bullish · arXiv – CS AI · Mar 17 · 7/10
🧠Researchers developed Token-Selective Dual Knowledge Distillation (TSD-KD), a new framework that improves AI reasoning by allowing smaller models to learn from larger ones more effectively. The method achieved up to 54.4% better accuracy than baseline models on reasoning benchmarks, with student models sometimes outperforming their teachers by up to 20.3%.
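The summary does not give TSD-KD's selection rule, but the core idea of distilling only on a chosen subset of tokens can be sketched as follows. This is a minimal illustration, assuming (hypothetically) that tokens are selected by per-token teacher–student KL disagreement; the paper's actual criterion and dual-distillation structure may differ.

```python
import numpy as np

def softmax(logits, axis=-1):
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def token_selective_kd_loss(teacher_logits, student_logits, keep_ratio=0.5):
    """Distill only on the tokens where teacher and student disagree most.

    teacher_logits, student_logits: (seq_len, vocab) arrays.
    keep_ratio: fraction of tokens that receive the KD signal.
    The disagreement criterion here (per-token KL) is an illustrative
    choice, not necessarily the one used by TSD-KD.
    """
    p = softmax(teacher_logits)          # teacher distribution per token
    q = softmax(student_logits)          # student distribution per token
    per_token_kl = (p * (np.log(p + 1e-12) - np.log(q + 1e-12))).sum(axis=-1)
    k = max(1, int(keep_ratio * len(per_token_kl)))
    selected = np.argsort(per_token_kl)[-k:]   # hardest tokens for the student
    return per_token_kl[selected].mean()

rng = np.random.default_rng(0)
t = rng.normal(size=(8, 16))
s = rng.normal(size=(8, 16))
loss = token_selective_kd_loss(t, s, keep_ratio=0.25)
print(float(loss))
```

With `keep_ratio=1.0` this reduces to ordinary sequence-level KL distillation; lowering the ratio concentrates the gradient signal on the tokens the student gets most wrong.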
AI · Bullish · arXiv – CS AI · Mar 5 · 7/10
🧠Researchers from KAIST propose AMiD, a new knowledge distillation framework that improves the efficiency of training smaller language models by transferring knowledge from larger models. The technique introduces α-mixture assistant distribution to address training instability and capacity gaps in existing approaches.
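One plausible reading of an "α-mixture assistant distribution" is a convex blend of the teacher's and student's predictive distributions, which gives the student a nearer target than the raw teacher. The sketch below assumes that form; AMiD's exact construction may differ.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def assistant_distribution(teacher_logits, student_logits, alpha=0.7):
    """Blend teacher and student predictive distributions.

    This convex mixture is an illustrative stand-in for AMiD's
    alpha-mixture assistant. With alpha=1 the assistant is the teacher;
    smaller alpha moves it toward the student, shrinking the capacity
    gap the student must bridge at each step.
    """
    p_t = softmax(teacher_logits)
    p_s = softmax(student_logits)
    return alpha * p_t + (1.0 - alpha) * p_s

def kl(p, q):
    return float((p * (np.log(p + 1e-12) - np.log(q + 1e-12))).sum())

rng = np.random.default_rng(1)
t = rng.normal(size=(32,))
s = rng.normal(size=(32,))
p_s = softmax(s)
full_gap = kl(assistant_distribution(t, s, alpha=1.0), p_s)
eased_gap = kl(assistant_distribution(t, s, alpha=0.5), p_s)
print(eased_gap < full_gap)  # prints True: the assistant is closer to the student
```

By convexity of KL in its first argument, the mixture target is never farther from the student than the pure teacher is, which is the intuition behind using an assistant to stabilize training.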
AI · Bullish · arXiv – CS AI · Mar 4 · 6/10
🧠Researchers propose Router Knowledge Distillation (Router KD) to improve retraining-free compression of Mixture-of-Experts (MoE) models by calibrating routers while keeping expert parameters unchanged. The method addresses router-expert mismatch issues that cause performance degradation in compressed MoE models, showing particularly strong results in fine-grained MoE architectures.
AI · Bullish · arXiv – CS AI · Mar 4 · 6/10
🧠Researchers introduce SiNGER, a new knowledge distillation framework for Vision Transformers that suppresses harmful high-norm artifacts while preserving informative signals. The technique uses nullspace-guided perturbation and LoRA-based adapters to achieve state-of-the-art performance in downstream tasks.
AI · Neutral · arXiv – CS AI · 2d ago · 6/10
🧠Researchers propose a semantic bootstrapping framework that transfers knowledge from large language models into interpretable symbolic Tsetlin Machines, enabling text classification systems to achieve BERT-comparable performance while remaining fully transparent and computationally efficient without runtime LLM dependencies.
AI · Neutral · arXiv – CS AI · 3d ago · 6/10
🧠Researchers propose a novel framework for improving symbolic distillation of neural networks by regularizing teacher models for functional smoothness using Jacobian and Lipschitz penalties. This approach addresses the core challenge that standard neural networks learn complex, irregular functions while symbolic regression models prioritize simplicity, resulting in poor knowledge transfer. Results across 20 datasets demonstrate statistically significant improvements in predictive accuracy for distilled symbolic models.
AI · Bullish · arXiv – CS AI · 4d ago · 6/10
🧠Researchers introduce WAND, a framework that reduces computational and memory costs of autoregressive text-to-speech models by replacing full self-attention with windowed attention combined with knowledge distillation. The approach achieves up to 66.2% KV cache memory reduction while maintaining speech quality, addressing a critical scalability bottleneck in modern AR-TTS systems.
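The memory saving comes from the windowed-attention side of WAND: if each position attends only to the last W keys, the KV cache stops growing with sequence length. A minimal NumPy sketch of a causal sliding-window attention mask, illustrative only (WAND's exact attention layout and distillation loss are in the paper):

```python
import numpy as np

def windowed_causal_attention(q, k, v, window=4):
    """Causal self-attention where each position attends only to the last
    `window` positions (itself included), so the KV cache can be capped at
    `window` entries instead of growing with sequence length.
    Shapes: q, k, v are (seq_len, d).
    """
    seq_len, d = q.shape
    scores = q @ k.T / np.sqrt(d)
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    mask = (j <= i) & (j > i - window)        # causal + sliding window
    scores = np.where(mask, scores, -np.inf)  # masked keys get zero weight
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v, mask

rng = np.random.default_rng(2)
x = rng.normal(size=(10, 8))
out, mask = windowed_causal_attention(x, x, x, window=4)
print(mask.sum(axis=1))  # number of visible keys per query, capped at 4
```

Knowledge distillation then trains this windowed student to match a full-attention teacher's outputs, recovering the quality the narrower receptive field would otherwise lose.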
AI · Bullish · Decrypt – AI · 4d ago · 6/10
🧠A developer has created Qwopus, a distilled version of Claude Opus 4.6's reasoning capabilities embedded into a local Qwen model that runs on consumer hardware. The tool democratizes access to advanced AI reasoning by enabling users with modest computing resources to run sophisticated models locally, challenging the centralized AI infrastructure paradigm.
AI · Bullish · arXiv – CS AI · Mar 17 · 6/10
🧠Researchers developed a resource-efficient framework for compressing large language models using knowledge distillation and chain-of-thought reinforcement learning. The method successfully compressed Qwen 3B to 0.5B while retaining 70-95% of performance across English, Spanish, and coding tasks, making AI models more suitable for resource-constrained deployments.
AI · Bullish · arXiv – CS AI · Mar 16 · 6/10
🧠Researchers introduce a new knowledge distillation framework that improves training of smaller AI models by using intermediate representations from large language models rather than their final outputs. The method shows consistent improvements across reasoning benchmarks, particularly when training data is limited, by providing cleaner supervision signals.
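Distilling from intermediate representations rather than final outputs typically means matching hidden states through a learned projection that bridges the width gap between teacher and student. A minimal sketch with a plain MSE objective, assuming a hypothetical linear projection (the paper's exact objective may differ):

```python
import numpy as np

def feature_distillation_loss(teacher_hidden, student_hidden, proj):
    """Match student hidden states to a teacher's intermediate layer.

    teacher_hidden: (seq_len, d_teacher); student_hidden: (seq_len, d_student);
    proj: (d_student, d_teacher) projection bridging the width gap.
    Plain MSE on projected features, used here purely for illustration.
    """
    projected = student_hidden @ proj
    return float(((projected - teacher_hidden) ** 2).mean())

rng = np.random.default_rng(3)
t_h = rng.normal(size=(6, 32))            # teacher intermediate layer
s_h = rng.normal(size=(6, 16))            # narrower student layer
proj_random = rng.normal(size=(16, 32)) * 0.1
# Fitting the projection (here by least squares, standing in for SGD)
# drives the feature-matching loss down:
proj_fit, *_ = np.linalg.lstsq(s_h, t_h, rcond=None)
print(feature_distillation_loss(t_h, s_h, proj_random),
      feature_distillation_loss(t_h, s_h, proj_fit))
```

The appeal in low-data regimes is that every hidden dimension supplies supervision, a much denser signal than one output distribution per token.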
AI · Neutral · arXiv – CS AI · Mar 12 · 6/10
🧠Researchers propose Contract And Conquer (CAC), a new method for provably generating adversarial examples against black-box neural networks using knowledge distillation and search space contraction. The approach provides theoretical guarantees for finding adversarial examples within a fixed number of iterations and outperforms existing methods on ImageNet datasets including vision transformers.
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10
🧠Researchers have developed KDFlow, a new framework for compressing large language models that achieves 1.44x to 6.36x faster training speeds compared to existing knowledge distillation methods. The framework uses a decoupled architecture that optimizes both training and inference efficiency while reducing communication costs through innovative data transfer techniques.
AI · Bullish · arXiv – CS AI · Mar 3 · 6/10
🧠Researchers propose Concrete Score Distillation (CSD), a new knowledge distillation method that improves efficiency of large language models by better preserving logit information compared to traditional softmax-based approaches. CSD demonstrates consistent performance improvements across multiple models including GPT-2, OpenLLaMA, and GEMMA while maintaining training stability.
AI · Bullish · arXiv – CS AI · Feb 27 · 5/10
🧠Researchers have developed Decoder-based Sense Knowledge Distillation (DSKD), a new framework that integrates lexical resources into decoder-style large language models during training. The method enhances knowledge distillation performance while enabling generative models to inherit structured semantics without requiring dictionary lookup during inference.
AI · Bullish · arXiv – CS AI · Feb 27 · 6/10
🧠Researchers propose RL-aware distillation (RLAD), a new method to efficiently transfer knowledge from large language models to smaller ones during reinforcement learning training. The approach uses Trust Region Ratio Distillation (TRRD) to selectively guide student models only when it improves policy updates, outperforming existing distillation methods across reasoning benchmarks.
AI · Bullish · arXiv – CS AI · Feb 27 · 6/10
🧠Researchers propose Struct-SQL, a knowledge distillation framework that improves Small Language Models for Text-to-SQL tasks by using structured Chain-of-Thought reasoning instead of unstructured approaches. The method achieves an 8.1% improvement over baseline distillation, primarily by reducing syntactic errors through formal query execution plan blueprints.
AI · Bullish · Hugging Face Blog · Aug 1 · 6/10
🧠Stability AI has open-sourced knowledge distillation code and model weights for SD-Small and SD-Tiny, making smaller and more efficient versions of Stable Diffusion available to the community. This release enables developers to run image generation models with reduced computational requirements while maintaining reasonable quality.
AI · Neutral · arXiv – CS AI · Apr 6 · 4/10
🧠Researchers developed EWAD and CPDP techniques for improving multi-teacher knowledge distillation in low-resource abstractive summarization tasks. The study across Bangla and cross-lingual datasets shows that logit-level knowledge distillation provides the most reliable gains, while more complex distillation schemes improve short summaries but degrade longer outputs.
AI · Bullish · arXiv – CS AI · Mar 27 · 5/10
🧠Researchers developed a method to transfer knowledge from traditional machine learning pipelines to neural networks, specifically converting random forest classifiers into student neural networks. Testing on 100 OpenML tasks showed that neural networks can successfully mimic random forest performance when proper hyperparameters are selected.
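The core move when distilling a random forest into a neural network is training the student on the teacher's class probabilities (soft labels) rather than hard class labels. A minimal sketch, using a single softmax-regression layer as the student and a synthetic stand-in for the forest's `predict_proba` output (the paper's students are deeper networks, and the hyperparameters here are illustrative):

```python
import numpy as np

def train_student_on_soft_labels(X, teacher_probs, epochs=200, lr=0.5):
    """Fit a tiny softmax-regression 'student' to a teacher's class
    probabilities via full-batch gradient descent on cross-entropy.
    X: (n, d) features; teacher_probs: (n, n_classes) soft labels.
    """
    n, d = X.shape
    W = np.zeros((d, teacher_probs.shape[1]))
    for _ in range(epochs):
        logits = X @ W
        logits -= logits.max(axis=1, keepdims=True)
        q = np.exp(logits)
        q /= q.sum(axis=1, keepdims=True)
        W -= lr * X.T @ (q - teacher_probs) / n   # cross-entropy gradient
    return W

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 5))
# Synthetic stand-in for random-forest soft labels from a hidden rule:
hidden = np.stack([X[:, 0] + X[:, 1], -X[:, 0]], axis=1)
teacher_probs = np.exp(hidden) / np.exp(hidden).sum(axis=1, keepdims=True)

W = train_student_on_soft_labels(X, teacher_probs)
student_logits = X @ W
agreement = (student_logits.argmax(1) == teacher_probs.argmax(1)).mean()
print(agreement)  # fraction of points where student and teacher agree
```

Soft labels carry the teacher's confidence on every class, which is what lets a small student mimic the forest's decision surface more faithfully than hard labels would.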
AI · Neutral · arXiv – CS AI · Mar 26 · 4/10
🧠Researchers propose Text-guided Multi-view Knowledge Distillation (TMKD), a new method that uses dual-modality teachers (visual and text) to improve knowledge transfer from large AI models to smaller ones. The approach enhances visual teachers with multi-view inputs and incorporates CLIP text guidance, achieving up to 4.49% performance improvements across five benchmarks.
AI · Neutral · arXiv – CS AI · Mar 16 · 4/10
🧠Researchers introduce Steve-Evolving, a new AI framework for open-world embodied agents that uses fine-grained diagnosis and knowledge distillation to improve long-horizon task performance. The system organizes interaction experiences into structured tuples and continuously evolves without model parameter updates, showing improvements in Minecraft testing environments.
AI · Neutral · arXiv – CS AI · Mar 3 · 4/10
🧠Researchers propose Rejuvenated Cross-Entropy for Knowledge Distillation (RCE-KD) to improve knowledge distillation in recommender systems by addressing limitations of the Cross-Entropy loss when distilling teacher model rankings. The method splits the teacher's top-ranked items into subsets and uses adaptive sampling to better align with the loss's theoretical assumptions.