arXiv · CS AI · 5h ago
From Fewer Samples to Fewer Bits: Reframing Dataset Distillation as Joint Optimization of Precision and Compactness
Researchers propose QuADD (Quantization-Aware Dataset Distillation), a framework that jointly optimizes the compactness and numerical precision of synthetic training data. By integrating differentiable quantization into the distillation process, the method achieves better accuracy per bit than existing approaches on image classification and 3GPP beam-management tasks.
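"Differentiable quantization" is commonly realized with a straight-through estimator (STE): the forward pass rounds values onto a b-bit grid, while the backward pass treats rounding as the identity so gradients can still reach the synthetic data. The paper's exact formulation is not given in this summary; the sketch below shows only a generic uniform forward quantizer, with `n_bits` and the per-tensor min/max range as illustrative assumptions:

```python
import numpy as np

def quantize_forward(x, n_bits=4):
    """Uniformly quantize x onto a (2**n_bits)-level grid over [x.min(), x.max()].

    In an STE setup this rounding is applied in the forward pass,
    while the backward pass passes gradients through unchanged.
    """
    levels = 2 ** n_bits - 1
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / levels if hi > lo else 1.0
    q = np.round((x - lo) / scale)   # integer grid indices in [0, levels]
    return q * scale + lo            # dequantized values on the grid

x = np.linspace(0.0, 1.0, 9)
xq = quantize_forward(x, n_bits=2)   # snapped to the 4 levels 0, 1/3, 2/3, 1
```

Because the rounding step is piecewise constant, a plain autodiff pass would yield zero gradients; the STE trick of using the identity in the backward pass is what makes joint optimization of the synthetic samples and their bit width tractable.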