y0news

#parameter-efficient-fine-tuning News & Analysis

5 articles tagged with #parameter-efficient-fine-tuning. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Bullish · arXiv – CS AI · Apr 10 · 6/10

FLeX: Fourier-based Low-rank EXpansion for multilingual transfer

Researchers propose FLeX, a parameter-efficient fine-tuning approach combining LoRA, advanced optimizers, and Fourier-based regularization to enable cross-lingual code generation across programming languages. The method achieves 42.1% pass@1 on Java tasks compared to a 34.2% baseline, demonstrating significant improvements in multilingual transfer without full model retraining.
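Methods like FLeX build on the standard LoRA recipe: the pretrained weight matrix is frozen and only a low-rank update is trained. A minimal sketch of that core idea in NumPy, with illustrative names and dimensions (not taken from the paper, and omitting FLeX's Fourier-based regularization):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2  # hidden size and adapter rank (r << d)

W = rng.normal(size=(d, d))         # frozen pretrained weight
A = rng.normal(size=(r, d)) * 0.01  # trainable down-projection
B = np.zeros((d, r))                # trainable up-projection, zero-initialized

def adapted_forward(x):
    # Effective weight is W + B @ A; only A and B would receive gradients.
    return x @ (W + B @ A).T

x = rng.normal(size=(1, d))
# Because B starts at zero, the adapter is a no-op before training begins.
assert np.allclose(adapted_forward(x), x @ W.T)
```

The parameter saving is the point: the adapter trains `2 * r * d` values instead of `d * d`, which is where the "without full model retraining" claim comes from.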

🧠 Llama
AI · Bullish · arXiv – CS AI · Mar 12 · 6/10

One Model, Many Skills: Parameter-Efficient Fine-Tuning for Multitask Code Analysis

Researchers conducted the first comprehensive evaluation of parameter-efficient fine-tuning (PEFT) for multi-task code analysis, showing that a single PEFT module can match full fine-tuning performance while reducing computational costs by up to 85%. The study also found that even 1B-parameter models equipped with multi-task PEFT outperform larger general-purpose code LLMs such as DeepSeek and CodeLlama on code analysis tasks.

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10

TiTok: Transfer Token-level Knowledge via Contrastive Excess to Transplant LoRA

TiTok is a new framework for transferring LoRA (Low-Rank Adaptation) parameters between different large language model backbones without requiring additional training data or discriminator models. The method uses token-level contrastive learning to achieve 4-10% performance gains over existing approaches in parameter-efficient fine-tuning scenarios.

AI · Bullish · arXiv – CS AI · Feb 27 · 6/10

Agentic AI for Intent-driven Optimization in Cell-free O-RAN

Researchers propose an agentic AI framework in which multiple LLM-based agents optimize cell-free Open RAN networks through intent-driven automation. In energy-saving mode the system reduces the number of active radio units by 42%, while parameter-efficient fine-tuning cuts memory usage by 92%.