35 articles tagged with #lora. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.
AI · Bearish · arXiv – CS AI · Mar 17 · 7/10
🧠Research reveals that fine-tuning aligned vision-language AI models on narrow harmful datasets causes severe safety degradation that generalizes across unrelated tasks. The study shows multimodal models exhibit 70% higher misalignment than text-only evaluation suggests, with even 10% harmful training data causing substantial alignment loss.
AI · Bullish · arXiv – CS AI · Mar 17 · 7/10
🧠Researchers introduce MapReduce LoRA and Reward-aware Token Embedding (RaTE) to optimize multiple preferences in generative AI models without degrading performance across dimensions. The methods show significant improvements across text-to-image, text-to-video, and language tasks, with gains ranging from 4.3% to 136.7% on various benchmarks.
🧠 Llama · 🧠 Stable Diffusion
AI · Bullish · arXiv – CS AI · Mar 9 · 7/10
🧠Researchers propose FLoRG, a new federated learning framework for efficiently fine-tuning large language models that reduces communication overhead by up to 2041x while improving accuracy. The method uses Gram matrix aggregation and Procrustes alignment to solve aggregation errors and decomposition drift issues in distributed AI training.
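The Procrustes step lends itself to a short illustration. Below is a minimal sketch, assuming each client holds LoRA factors B_i and A_i whose product is invariant under an inner rotation; FLoRG's Gram-matrix aggregation itself is not reproduced, and all names are illustrative.
```python
import numpy as np

def procrustes_align(B_ref, B_i, A_i):
    # Orthogonal R minimizing ||B_i @ R - B_ref||_F (orthogonal Procrustes).
    U, _, Vt = np.linalg.svd(B_i.T @ B_ref)
    R = U @ Vt
    # Rotating B_i by R and A_i by R.T leaves the product B_i @ A_i unchanged.
    return B_i @ R, R.T @ A_i

rng = np.random.default_rng(0)
d, r, k = 64, 8, 32
B_ref = rng.normal(size=(d, r))                   # reference frame (e.g. server's)
clients = []
for _ in range(4):
    Q, _ = np.linalg.qr(rng.normal(size=(r, r)))  # each client drifts by a rotation
    clients.append((B_ref @ Q + 0.01 * rng.normal(size=(d, r)),
                    Q.T @ rng.normal(size=(r, k))))

aligned = [procrustes_align(B_ref, B, A) for B, A in clients]
B_avg = np.mean([B for B, _ in aligned], axis=0)  # aggregate in a common frame
A_avg = np.mean([A for _, A in aligned], axis=0)
print(B_avg.shape, A_avg.shape)                   # (64, 8) (8, 32)
```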
AI · Bullish · arXiv – CS AI · Mar 5 · 7/10
🧠Researchers have developed Spectral Surgery, a training-free method to improve LoRA (Low-Rank Adaptation) model performance by reweighting singular values based on gradient sensitivity. The technique achieves significant performance gains (up to +4.4 points on CommonsenseQA) by adjusting only about 1,000 scalar coefficients without requiring retraining.
🧠 Llama
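Spectral Surgery's core operation is compact enough to sketch. A hedged illustration, assuming the merged update is ΔW = BA; the gradient-sensitivity rule that produces the weights is stubbed with placeholders.
```python
import numpy as np

def reweight_singular_values(B, A, weights):
    # Merge the LoRA factors, decompose, and rescale each spectral component.
    dW = B @ A                                    # merged low-rank update
    U, S, Vt = np.linalg.svd(dW, full_matrices=False)
    return U @ np.diag(weights * S) @ Vt          # reweighted, still low-rank

rng = np.random.default_rng(0)
B, A = rng.normal(size=(64, 8)), rng.normal(size=(8, 128))
w = np.ones(64)           # one scalar per singular value of this layer's update
w[:4] *= 1.1              # e.g. amplify the most gradient-sensitive directions
dW_new = reweight_singular_values(B, A, w)
print(np.linalg.matrix_rank(dW_new))  # 8: only the spectrum changed, not the rank
```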
AI · Bullish · arXiv – CS AI · Feb 27 · 7/10
🧠Researchers introduce NoRA (Non-linear Rank Adaptation), a new parameter-efficient fine-tuning method that overcomes the 'linear ceiling' limitations of traditional LoRA by using SiLU gating and structural dropout. NoRA achieves superior performance at rank 64 compared to LoRA at rank 512, demonstrating significant efficiency gains in complex reasoning tasks.
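A minimal sketch of such a gated, non-linear LoRA path follows; the gate placement, dropout site, and zero-initialization are assumptions, not NoRA's exact design.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NonlinearLoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 64, dropout: float = 0.1):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False               # frozen pretrained weight
        self.down = nn.Linear(base.in_features, rank, bias=False)
        self.up = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.up.weight)            # adapter starts as a no-op
        self.drop = nn.Dropout(dropout)           # stand-in for structural dropout

    def forward(self, x):
        # SiLU on the rank-r bottleneck breaks vanilla LoRA's linearity, so the
        # adapter is no longer restricted to a rank-r *linear* update.
        return self.base(x) + self.up(self.drop(F.silu(self.down(x))))

layer = NonlinearLoRALinear(nn.Linear(512, 512), rank=64)
print(layer(torch.randn(2, 512)).shape)           # torch.Size([2, 512])
```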
AI · Neutral · arXiv – CS AI · 1d ago · 6/10
🧠Researchers propose Polynomial Expansion Rank Adaptation (PERA), a novel fine-tuning method that enhances Low-Rank Adaptation (LoRA) by incorporating high-order polynomial interactions into low-rank factors. PERA improves the expressive capacity of LLM fine-tuning without increasing computational costs, demonstrating consistent performance gains across benchmarks while maintaining the efficiency benefits of rank-constrained adaptation.
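One plausible reading of PERA, sketched under the assumption that the polynomial terms act elementwise on the rank-r bottleneck so the parameter count stays essentially unchanged; the paper's exact formulation may differ.
```python
import torch
import torch.nn as nn

class PolyLoRA(nn.Module):
    def __init__(self, d_in, d_out, rank=8, degree=3):
        super().__init__()
        self.A = nn.Linear(d_in, rank, bias=False)   # down-projection
        self.B = nn.Linear(rank, d_out, bias=False)  # up-projection
        nn.init.zeros_(self.B.weight)
        # One learnable coefficient per polynomial order (linear term first),
        # so the module starts out as plain LoRA.
        self.coeffs = nn.Parameter(torch.tensor([1.0] + [0.0] * (degree - 1)))

    def forward(self, x):
        z = self.A(x)
        # z + c2*z**2 + c3*z**3 + ...: higher-order interactions confined to the
        # rank-r bottleneck, so no extra full-size matrices are introduced.
        poly = sum(c * z ** (i + 1) for i, c in enumerate(self.coeffs))
        return self.B(poly)

m = PolyLoRA(512, 512)
print(m(torch.randn(2, 512)).shape)               # torch.Size([2, 512])
```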
AI · Neutral · arXiv – CS AI · 1d ago · 6/10
🧠Researchers propose an SVD-based orthogonal subspace projection method for continual machine unlearning that prevents interference between sequential deletion tasks in neural networks. The approach maintains model performance on retained data while effectively removing influence of unlearned data, addressing a critical limitation of naive LoRA fusion methods.
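A minimal sketch of the projection idea, assuming each past deletion task contributes a protected subspace and new updates are projected onto its orthogonal complement; how that subspace is built here is purely illustrative.
```python
import numpy as np

def top_subspace(delta, k):
    # Orthonormal basis for the dominant k-dim column space of a past update.
    U, _, _ = np.linalg.svd(delta, full_matrices=False)
    return U[:, :k]

def project_out(update, basis):
    # u <- (I - U U^T) u: strip components lying in previously used subspaces,
    # so a new deletion task cannot disturb what earlier tasks changed.
    return update - basis @ (basis.T @ update)

rng = np.random.default_rng(0)
U_past = top_subspace(rng.normal(size=(64, 64)), k=8)   # protected subspace
new_update = rng.normal(size=(64, 64))                  # next deletion task
safe_update = project_out(new_update, U_past)
print(np.abs(U_past.T @ safe_update).max())             # ~0: no interference
```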
AI · Neutral · arXiv – CS AI · 2d ago · 6/10
🧠Researchers present a unified framework for understanding how different methods control large language models—including fine-tuning, LoRA, and activation interventions—revealing a fundamental trade-off between steering strength and output quality. The analysis explains this through an activation manifold perspective and introduces SPLIT, a new steering method that improves control while better preserving model coherence.
AI · Bullish · arXiv – CS AI · 6d ago · 6/10
🧠Researchers introduce LoRA-DA, a new initialization method for Low-Rank Adaptation that leverages target-domain data and theoretical optimization principles to improve fine-tuning performance. The method outperforms existing initialization approaches across multiple benchmarks while maintaining computational efficiency.
AI · Bullish · arXiv – CS AI · 6d ago · 6/10
🧠Researchers propose FLeX, a parameter-efficient fine-tuning approach combining LoRA, advanced optimizers, and Fourier-based regularization to enable cross-lingual code generation across programming languages. The method achieves 42.1% pass@1 on Java tasks compared to a 34.2% baseline, demonstrating significant improvements in multilingual transfer without full model retraining.
🧠 Llama
AI · Bullish · arXiv – CS AI · Apr 6 · 6/10
🧠Researchers developed new compression techniques for LLM-generated text, achieving massive compression ratios through domain-adapted LoRA adapters and an interactive 'Question-Asking' protocol. The QA method uses binary questions to transfer knowledge between small and large models, achieving compression ratios of 0.0006-0.004 while recovering 23-72% of capability gaps.
AI · Neutral · arXiv – CS AI · Mar 27 · 6/10
🧠Researchers introduce ReLope, a new routing method for multimodal large language models that uses KL-regularized LoRA probes and attention mechanisms to improve cost-performance balance. The method addresses the challenge of degraded probe performance when visual inputs are added to text-only LLMs.
AI · Neutral · arXiv – CS AI · Mar 17 · 6/10
🧠Researchers propose a new framework for improving safety in multimodal AI models by targeting unsafe relationships between objects rather than removing entire concepts. The approach uses parameter-efficient edits to suppress dangerous combinations while preserving benign uses of the same objects and relations.
AI · Bullish · arXiv – CS AI · Mar 17 · 6/10
🧠Researchers propose FedTreeLoRA, a new framework for privacy-preserving fine-tuning of large language models that addresses both statistical and functional heterogeneity across federated learning clients. The method uses tree-structured aggregation to allow layer-wise specialization while maintaining shared consensus on foundational layers, significantly outperforming existing personalized federated learning approaches.
AI · Bullish · arXiv – CS AI · Mar 17 · 6/10
🧠Researchers introduce IGU-LoRA, a new parameter-efficient fine-tuning method for large language models that adaptively allocates ranks across layers using integrated gradients and uncertainty-aware scoring. The approach addresses limitations of existing methods like AdaLoRA by providing more stable and accurate layer importance estimates, consistently outperforming baselines across diverse tasks.
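The allocation step can be sketched as budgeted rank assignment; the per-layer scores below are placeholder numbers standing in for IGU-LoRA's integrated-gradient and uncertainty-aware estimates, not the paper's scoring rule.
```python
import numpy as np

def allocate_ranks(importance, rank_budget, r_min=2, r_max=64):
    # Give each layer a rank proportional to its normalized importance score;
    # rounding and clipping mean the total can deviate slightly from the budget.
    share = importance / importance.sum()
    return np.clip(np.round(share * rank_budget), r_min, r_max).astype(int)

importance = np.array([0.9, 0.2, 0.5, 1.4, 0.1])  # placeholder layer scores
print(allocate_ranks(importance, rank_budget=128))  # e.g. [37  8 21 58  4]
```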
AI · Bullish · arXiv – CS AI · Mar 12 · 6/10
🧠Research demonstrates that LoRA fine-tuning of large language models significantly improves text-to-speech systems, achieving up to 0.42 DNS-MOS gains and 34% SNR improvements when training data has sufficient acoustic diversity. The study establishes LoRA as an effective mechanism for speaker adaptation in compact LLM-based TTS systems, outperforming frozen base models across perceptual quality, speaker fidelity, and signal quality metrics.
AI · Bullish · arXiv – CS AI · Mar 5 · 5/10
🧠Researchers have developed a new approach using multiplicative LoRA (Low-Rank Adaptation) weights for neural field representation learning, achieving improved quality in reconstruction, generation, and analysis tasks. The method constrains optimization space through pre-trained base models, creating structured weight representations that outperform existing weight-space methods when used with latent diffusion models.
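The contrast with standard additive LoRA fits in a few lines (illustrative; the neural-field training pipeline around it is omitted).
```python
import torch

d, r = 64, 4
W0 = torch.randn(d, d)                    # frozen pre-trained base weight
B, A = torch.randn(d, r), torch.zeros(r, d)

W_additive = W0 + B @ A                   # standard LoRA: delta ignores W0
# Multiplicative form: the update is expressed *relative to* W0, constraining
# the search space to transformations of the pre-trained weight.
W_multiplicative = W0 @ (torch.eye(d) + B @ A)

# With A zero-initialized, both reduce to the base weight:
print(torch.allclose(W_additive, W0), torch.allclose(W_multiplicative, W0))
```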
AI · Bullish · arXiv – CS AI · Mar 4 · 5/10
🧠Researchers propose Q-LoRA, a quantum-enhanced fine-tuning method that integrates quantum neural networks into LoRA adapters for improved AI-generated content detection. The study also introduces H-LoRA, a classical variant using Hilbert transforms that achieves similar 5%+ accuracy improvements over standard LoRA at lower computational cost.
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10
🧠Researchers propose DeLo, a new framework using dual-decomposed low-rank expert architecture to help Large Multimodal Models adapt to real-world scenarios with incomplete data. The system addresses continual missing modality learning by preventing interference between different data types and tasks through specialized routing and memory mechanisms.
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10
🧠Researchers introduce ALTER, a new framework for efficiently "unlearning" specific knowledge from large language models while preserving their overall utility. The system uses asymmetric LoRA architecture to selectively forget targeted information with 95% effectiveness while maintaining over 90% model utility, significantly outperforming existing methods.
AI · Bullish · arXiv – CS AI · Mar 3 · 6/10
🧠TiTok is a new framework for transferring LoRA (Low-Rank Adaptation) parameters between different Large Language Model backbones without requiring additional training data or discriminator models. The method uses token-level contrastive learning to achieve 4-10% performance gains over existing approaches in parameter-efficient fine-tuning scenarios.
AI · Bullish · arXiv – CS AI · Mar 2 · 6/10
🧠Researchers propose FedRot-LoRA, a new framework that solves rotational misalignment issues in federated learning for large language models. The solution uses orthogonal transformations to align client updates before aggregation, improving training stability and performance without increasing communication costs.
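The ambiguity being targeted is easy to demonstrate, along with a Procrustes-style orthogonal repair; whether this matches FedRot-LoRA's exact transformation is an assumption.
```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 32, 4
B, A = rng.normal(size=(d, r)), rng.normal(size=(r, d))
Q, _ = np.linalg.qr(rng.normal(size=(r, r)))      # an arbitrary inner rotation

# Two clients representing the SAME update in rotated coordinates:
B1, A1 = B, A
B2, A2 = B @ Q, Q.T @ A
print(np.allclose(B1 @ A1, B2 @ A2))              # True: identical products

# Naive factor averaging destroys that update:
B_avg, A_avg = (B1 + B2) / 2, (A1 + A2) / 2
print(np.linalg.norm(B_avg @ A_avg - B @ A))      # large aggregation error

# Aligning client 2 into client 1's frame (orthogonal Procrustes) repairs it:
U, _, Vt = np.linalg.svd(B2.T @ B1)
R = U @ Vt
B_avg, A_avg = (B1 + B2 @ R) / 2, (A1 + R.T @ A2) / 2
print(np.linalg.norm(B_avg @ A_avg - B @ A))      # ~0 after alignment
```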
AI · Bullish · arXiv – CS AI · Mar 2 · 6/10
🧠Researchers introduce LoRA-Pre, a memory-efficient optimizer that reduces memory overhead in training large language models by using low-rank approximation of momentum states. The method achieves superior performance on Llama models from 60M to 1B parameters while using only 1/8 the rank of baseline methods.
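A hedged sketch of momentum kept in a projected rank-r space, in the spirit of GaLore-style low-rank optimizer states; LoRA-Pre's actual projection and update rule may differ.
```python
import numpy as np

class LowRankMomentum:
    def __init__(self, shape, rank, beta=0.9):
        self.rank, self.beta = rank, beta
        self.P = None                             # projection basis (d x r)
        self.m = np.zeros((rank, shape[1]))       # momentum lives in rank-r space

    def step(self, grad, lr=1e-2):
        if self.P is None:
            # Basis from the gradient's top-r left singular vectors.
            U, _, _ = np.linalg.svd(grad, full_matrices=False)
            self.P = U[:, :self.rank]
        g_low = self.P.T @ grad                   # (r x k) instead of (d x k)
        self.m = self.beta * self.m + g_low       # momentum at r/d of the memory
        return -lr * (self.P @ self.m)            # map back for the weight update

opt = LowRankMomentum(shape=(1024, 1024), rank=128)
update = opt.step(np.random.default_rng(0).normal(size=(1024, 1024)))
print(update.shape)                               # (1024, 1024)
```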
AI · Bullish · Hugging Face Blog · Jun 19 · 6/10
🧠The article discusses fine-tuning FLUX.1-dev using LoRA (Low-Rank Adaptation) techniques on consumer-grade hardware. This approach makes advanced AI model customization more accessible to individual developers and smaller organizations without requiring enterprise-level computing resources.
AI · Bullish · Hugging Face Blog · Dec 5 · 6/10
🧠Based on its title, the article claims a 300% speed improvement in LoRA (Low-Rank Adaptation) inference by eliminating cold-boot overhead. If borne out, this optimization could significantly improve the efficiency of serving many LoRA adapters.