arXiv – CS AI · 9h ago
🧠
Beyond LoRA vs. Full Fine-Tuning: Gradient-Guided Optimizer Routing for LLM Adaptation
Researchers propose MoLF (Mixture of LoRA and Full Fine-Tuning), a hybrid framework that uses gradient signals to dynamically route updates between full fine-tuning and low-rank adaptation during LLM training. The approach addresses the trade-off between the expressiveness of full fine-tuning and the parameter efficiency of LoRA, achieving competitive or superior performance across diverse tasks while maintaining training stability and memory efficiency.
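The summary does not spell out the routing criterion, so the following is only a minimal PyTorch sketch of what gradient-guided routing between a full-weight path and a LoRA branch could look like. The `HybridLinear` module, the `route_updates` helper, and the per-layer gradient-norm threshold are all illustrative assumptions, not the paper's actual method.

```python
import torch
import torch.nn as nn

class HybridLinear(nn.Module):
    """Linear layer carrying both a full weight matrix and a LoRA branch.

    Hypothetical sketch: names and structure are illustrative, not MoLF's
    published architecture.
    """

    def __init__(self, in_features: int, out_features: int,
                 rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(out_features, in_features))
        nn.init.kaiming_uniform_(self.weight, a=5 ** 0.5)
        # LoRA factors: B starts at zero so the branch contributes nothing
        # until it receives updates.
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        full = x @ self.weight.T
        low_rank = (x @ self.lora_A.T) @ self.lora_B.T
        return full + self.scaling * low_rank

def route_updates(model: nn.Module, threshold: float = 1e-3) -> None:
    """Assumed routing rule: after backward(), a layer whose full-weight
    gradient norm exceeds a threshold keeps the full update and drops the
    LoRA update; otherwise only the LoRA branch is updated this step."""
    for module in model.modules():
        if not isinstance(module, HybridLinear):
            continue
        if module.weight.grad is None:
            continue
        if module.weight.grad.norm() > threshold:
            # Full fine-tuning path: discard LoRA gradients for this step.
            for p in (module.lora_A, module.lora_B):
                if p.grad is not None:
                    p.grad.zero_()
        else:
            # Low-rank path: discard the full-weight gradient.
            module.weight.grad.zero_()

# Usage: apply the router between backward() and optimizer.step().
layer = HybridLinear(512, 512)
opt = torch.optim.AdamW(layer.parameters(), lr=1e-4)
loss = layer(torch.randn(4, 512)).pow(2).mean()
loss.backward()
route_updates(layer)  # zero out the gradients the router did not select
opt.step()
opt.zero_grad()
```

Note that this sketch still materializes full-weight gradients for every layer, which would forfeit LoRA's memory savings; a memory-efficient implementation would presumably make the routing decision before backpropagation. The norm criterion, threshold value, and per-layer granularity are guesses pending the paper's details.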