AIBullish · arXiv – CS AI · 10h ago · 6/10
🧠
CERSA: Cumulative Energy-Retaining Subspace Adaptation for Memory-Efficient Fine-Tuning
Researchers introduce CERSA, a parameter-efficient fine-tuning method that uses singular value decomposition to reduce memory consumption when fine-tuning large language models. The technique outperforms existing methods such as LoRA, capturing more of the rank structure of the weight updates while requiring substantially less memory to store the frozen weights.
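The paper itself is behind the link, but the name "cumulative energy-retaining subspace" suggests the familiar truncated-SVD recipe: pick the smallest rank whose singular values capture a target fraction of the weight matrix's total energy, and keep only that subspace of the frozen weights. A minimal sketch of that idea (the helper name, the 95% threshold, and the random weight are illustrative assumptions, not CERSA's actual algorithm):

```python
import numpy as np

def energy_retaining_rank(singular_values, energy=0.95):
    """Smallest rank whose cumulative squared singular values
    capture at least `energy` of the total spectral energy."""
    cum = np.cumsum(singular_values**2) / np.sum(singular_values**2)
    return int(np.searchsorted(cum, energy) + 1)

rng = np.random.default_rng(0)
# Synthetic low-rank-ish weight matrix standing in for a frozen LLM weight.
W = rng.standard_normal((256, 64)) @ rng.standard_normal((64, 256))

U, S, Vt = np.linalg.svd(W, full_matrices=False)
r = energy_retaining_rank(S, energy=0.95)

# Store only the rank-r factors instead of the full frozen matrix:
# (256*r + r + r*256) floats rather than 256*256.
W_approx = (U[:, :r] * S[:r]) @ Vt[:r]
```

Because the Frobenius norm squared equals the sum of squared singular values, the retained subspace provably keeps at least 95% of the weight's energy, which is presumably where the memory saving on frozen weights comes from.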