AIBullish · arXiv CS AI · 10h ago · 7/10
Boosting Large Language Models with Mask Fine-Tuning
Researchers introduce Mask Fine-Tuning (MFT), a method that improves large language model performance by learning binary masks over an already fine-tuned model's weights, rather than updating the weights themselves. The approach yields consistent gains across domains and model architectures, including average improvements of 2.70 and 4.15 points on the IFEval benchmark for LLaMA models.
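The core idea of weight masking can be sketched as follows. This is an illustrative NumPy example, not the paper's implementation: the mask here is random rather than learned, and the 4x4 matrix stands in for a real model's weight tensors. It shows the key property the summary describes: the frozen weights are never modified, and the effective weights are produced purely by elementwise multiplication with a binary mask.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen weights of a (hypothetical, tiny) fine-tuned layer.
weights = rng.standard_normal((4, 4))

# Binary mask: 1 keeps a weight, 0 drops it. In MFT this mask would be
# learned; here it is random purely for illustration.
mask = (rng.random((4, 4)) > 0.1).astype(weights.dtype)

# Effective weights used in the forward pass: elementwise product.
masked_weights = weights * mask

# Masked-out positions are exactly zero; kept positions are unchanged,
# and the original weight tensor itself is never written to.
assert np.all(masked_weights[mask == 0] == 0.0)
assert np.allclose(masked_weights[mask == 1], weights[mask == 1])
```

In practice the mask would be trained (e.g. with a relaxation that makes the discrete choice differentiable) while the underlying weights stay frozen; only the illustration of the masking operation is shown here.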