y0news

#fine-tuning News & Analysis

148 articles tagged with #fine-tuning. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Bullish · arXiv – CS AI · 1d ago · 7/10

Efficient Adversarial Training via Criticality-Aware Fine-Tuning

Researchers introduce Criticality-Aware Adversarial Training (CAAT), a parameter-efficient method that identifies and fine-tunes only the most robustness-critical parameters in Vision Transformers, achieving 94.3% of standard adversarial training robustness while tuning just 6% of model parameters. The approach targets the computational bottleneck that has limited large-scale deployment of adversarial training.
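
To make the idea concrete, here is a minimal sketch of criticality-aware parameter selection in PyTorch, assuming parameters are scored by the magnitude of their gradient under an adversarial loss; the paper's actual criticality criterion and training loop may differ.

```python
# Hedged sketch: score parameters by adversarial-gradient magnitude and only
# update the top fraction of them (an assumption about CAAT's selection rule).
import torch
import torch.nn as nn

def criticality_masks(model, loss_fn, x_adv, y, keep_frac=0.06):
    """Score each weight by |d loss_adv / d w| and keep the top `keep_frac`."""
    model.zero_grad()
    loss_fn(model(x_adv), y).backward()
    scores = torch.cat([p.grad.abs().flatten() for p in model.parameters()])
    threshold = torch.quantile(scores, 1.0 - keep_frac)
    return [p.grad.abs() >= threshold for p in model.parameters()]

def masked_sgd_step(model, masks, lr=1e-3):
    """Update only the parameters flagged as robustness-critical."""
    with torch.no_grad():
        for p, m in zip(model.parameters(), masks):
            p -= lr * p.grad * m

# Toy usage with a stand-in classifier and a pre-made "adversarial" batch.
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
x_adv, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
loss_fn = nn.CrossEntropyLoss()
masks = criticality_masks(model, loss_fn, x_adv, y)
model.zero_grad()
loss_fn(model(x_adv), y).backward()
masked_sgd_step(model, masks)
```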

AI · Bullish · arXiv – CS AI · 1d ago · 7/10

Preventing Safety Drift in Large Language Models via Coupled Weight and Activation Constraints

Researchers propose Coupled Weight and Activation Constraints (CWAC), a novel safety alignment technique for large language models that simultaneously constrains weight updates and regularizes activation patterns to prevent harmful outputs during fine-tuning. The method demonstrates that existing single-constraint approaches are insufficient and outperforms baselines across multiple LLMs while maintaining task performance.
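
As a rough illustration of the coupling idea (not the paper's exact formulation), the sketch below adds an L2 weight-drift penalty toward a frozen, safety-aligned reference model and an activation-matching penalty on hidden states to an ordinary fine-tuning loss; the model, coefficients, and shapes are placeholders.

```python
# Hedged sketch of a coupled weight-and-activation penalty during fine-tuning.
import copy
import torch
import torch.nn as nn

class TinyLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Linear(16, 32)
        self.head = nn.Linear(32, 100)
    def forward(self, x):
        h = torch.relu(self.body(x))      # hidden activations we want to keep close
        return self.head(h), h

model = TinyLM()
ref_model = copy.deepcopy(model).eval()   # frozen reference (pre-fine-tuning weights)
for p in ref_model.parameters():
    p.requires_grad_(False)

def cwac_loss(logits, targets, hidden, ref_hidden, lam_w=1e-3, lam_a=1e-2):
    task = nn.functional.cross_entropy(logits, targets)
    weight_drift = sum((p - q).pow(2).sum()
                       for p, q in zip(model.parameters(), ref_model.parameters()))
    activation_drift = (hidden - ref_hidden).pow(2).mean()
    return task + lam_w * weight_drift + lam_a * activation_drift

x, y = torch.randn(4, 16), torch.randint(0, 100, (4,))
logits, h = model(x)
with torch.no_grad():
    _, h_ref = ref_model(x)
cwac_loss(logits, y, h, h_ref).backward()
```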

AI · Bearish · arXiv – CS AI · 2d ago · 7/10

Is There Knowledge Left to Extract? Evidence of Fragility in Medically Fine-Tuned Vision-Language Models

Researchers evaluated domain-specific fine-tuning of vision-language models (VLMs) on medical imaging tasks and found that performance degrades significantly with task complexity, with medical fine-tuning providing no consistent advantage. The study reveals that these models exhibit fragility and high sensitivity to prompt variations, questioning the reliability of VLMs for high-stakes medical applications.

🧠 GPT-5
AI · Bearish · arXiv – CS AI · 2d ago · 7/10

Powerful Training-Free Membership Inference Against Autoregressive Language Models

Researchers have developed EZ-MIA, a training-free membership inference attack that dramatically improves detection of memorized data in fine-tuned language models by analyzing probability shifts at error positions. The method achieves 3.8x higher detection rates than previous approaches on GPT-2 and demonstrates that privacy risks in fine-tuned models are substantially greater than previously understood.

🧠 Llama
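
A minimal sketch of the underlying signal, assuming HuggingFace-style causal LMs and a simple mean-shift statistic; EZ-MIA's actual scoring and calibration are more involved.

```python
# Membership score from probability shifts at positions the base model gets wrong.
import torch

@torch.no_grad()
def membership_score(base_model, ft_model, input_ids):
    """Average log-prob gain of the fine-tuned model at the base model's error
    positions; higher scores suggest the text was in the fine-tuning set."""
    def token_logprobs_and_preds(model):
        logits = model(input_ids).logits[:, :-1]             # next-token logits
        logprobs = torch.log_softmax(logits, dim=-1)
        targets = input_ids[:, 1:]
        lp = logprobs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
        return lp, logits.argmax(dim=-1), targets

    base_lp, base_pred, targets = token_logprobs_and_preds(base_model)
    ft_lp, _, _ = token_logprobs_and_preds(ft_model)
    error_positions = base_pred != targets                   # base model errs here
    if error_positions.sum() == 0:
        return torch.tensor(0.0)
    return (ft_lp - base_lp)[error_positions].mean()
```
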
AI · Bullish · arXiv – CS AI · 2d ago · 7/10

Multi-Model Synthetic Training for Mission-Critical Small Language Models

Researchers demonstrate a cost-effective approach to training specialized small language models by using LLMs as one-time teachers to generate synthetic training data. By converting 3.2 billion maritime vessel tracking records into 21,543 QA pairs, they fine-tuned Qwen2.5-7B to achieve 75% accuracy on maritime tasks at a fraction of the cost of deploying larger models, establishing a reproducible framework for domain-specific AI applications.

🧠 GPT-4
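
For flavor, a sketch of the one-time teacher pass, with a hypothetical `call_teacher_llm` callable and an invented record schema standing in for the actual maritime data and prompts.

```python
# Convert structured records into QA pairs via a teacher LLM, stored as SFT data.
import json

def record_to_prompt(record: dict) -> str:
    return (
        "You are generating training data. Given this vessel-tracking record, "
        "write one question a maritime analyst might ask and its answer.\n"
        f"Record: {json.dumps(record)}\n"
        'Reply as JSON: {"question": ..., "answer": ...}'
    )

def build_sft_dataset(records, call_teacher_llm, out_path="maritime_qa.jsonl"):
    """One-time teacher pass: each record becomes one QA pair in the chat
    format most SFT trainers accept."""
    with open(out_path, "w") as f:
        for record in records:
            qa = json.loads(call_teacher_llm(record_to_prompt(record)))
            f.write(json.dumps({"messages": [
                {"role": "user", "content": qa["question"]},
                {"role": "assistant", "content": qa["answer"]},
            ]}) + "\n")
```
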
AI · Bullish · arXiv – CS AI · 2d ago · 7/10

Disambiguation-Centric Finetuning Makes Enterprise Tool-Calling LLMs More Realistic and Less Risky

Researchers introduce DiaFORGE, a three-stage framework for training LLMs to reliably invoke enterprise APIs by focusing on disambiguation between similar tools and underspecified arguments. Fine-tuned models achieved 27-49 percentage points higher tool-invocation success than GPT-4o and Claude-3.5-Sonnet, with an open corpus of 5,000 production-grade API specifications released for further research.

🧠 GPT-4 · 🧠 Claude
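
As an illustration of what a disambiguation-centric training sample might look like (the tool names, schema, and dialogue are invented, not drawn from the released corpus):

```python
# A single training sample that rewards asking a clarifying question before
# invoking one of two near-identical enterprise tools.
import json

sample = {
    "tools": [
        {"name": "get_invoice_by_id", "args": {"invoice_id": "string"}},
        {"name": "get_invoice_by_po", "args": {"po_number": "string"}},  # near-duplicate
    ],
    "messages": [
        {"role": "user", "content": "Pull up the invoice for order 4471."},
        # The model is trained to disambiguate before calling anything.
        {"role": "assistant",
         "content": "Is 4471 an invoice ID or a purchase-order number?"},
        {"role": "user", "content": "It's the PO number."},
        {"role": "assistant",
         "tool_call": {"name": "get_invoice_by_po",
                       "arguments": {"po_number": "4471"}}},
    ],
}
print(json.dumps(sample, indent=2))
```
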
AI · Bullish · arXiv – CS AI · 2d ago · 7/10

Proximal Supervised Fine-Tuning

Researchers propose Proximal Supervised Fine-Tuning (PSFT), a new method that applies trust-region constraints from reinforcement learning to improve how foundation models adapt to new tasks. The technique maintains model capabilities while fine-tuning, outperforming standard supervised fine-tuning on out-of-domain generalization tasks.
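
A back-of-the-envelope sketch of how a trust-region constraint can be grafted onto the SFT objective, borrowing the PPO-style clipped ratio against a frozen reference model; the paper's actual loss may differ.

```python
# Clipped-ratio SFT: tokens whose probability already grew past (1 + eps) times
# the reference stop receiving gradient, keeping updates inside a trust region.
import torch

def proximal_sft_loss(logprobs, ref_logprobs, eps=0.2):
    """`logprobs`/`ref_logprobs`: per-token log-probs of the ground-truth tokens
    under the current and the frozen reference model."""
    ratio = torch.exp(logprobs - ref_logprobs.detach())
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps)
    return -torch.min(ratio, clipped).mean()

# Toy usage with random per-token log-probs.
logprobs = torch.log(torch.rand(4, 16))
logprobs.requires_grad_(True)
ref_logprobs = torch.log(torch.rand(4, 16))
proximal_sft_loss(logprobs, ref_logprobs).backward()
```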

AI · Bullish · arXiv – CS AI · 3d ago · 7/10

SkillFactory: Self-Distillation For Learning Cognitive Behaviors

SkillFactory is a novel fine-tuning method that enables language models to learn cognitive behaviors like verification and backtracking without requiring distillation from stronger models. The approach uses self-rearranged training samples during supervised fine-tuning to prime models for subsequent reinforcement learning, resulting in better generalization and robustness.

AI · Bullish · arXiv – CS AI · 6d ago · 7/10

Less is More: Data-Efficient Adaptation for Controllable Text-to-Video Generation

Researchers demonstrate a data-efficient fine-tuning method for text-to-video diffusion models that enables new generative controls using sparse, low-quality synthetic data rather than expensive, photorealistic datasets. Counterintuitively, models trained on simple synthetic data outperform those trained on high-fidelity real data, supported by both empirical results and theoretical justification.

AI · Bullish · arXiv – CS AI · Apr 7 · 7/10

Unlocking Prompt Infilling Capability for Diffusion Language Models

Researchers have developed a method to unlock prompt infilling capabilities in masked diffusion language models by extending masking to the full sequence during supervised fine-tuning, rather than the conventional response-only masking. With this change, models can automatically generate effective prompts that match or exceed manually designed templates, suggesting that training practice, not architectural limitations, was the primary constraint.
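
The core training-recipe change can be sketched as follows, with a placeholder mask-token id and an illustrative noising rule; the actual diffusion noising schedule in the paper is richer.

```python
# Response-only vs. full-sequence masking for SFT of a masked diffusion LM.
import torch

MASK_ID = 0  # placeholder mask-token id

def response_only_masking(input_ids, prompt_len, mask_prob):
    """Conventional SFT: only response tokens are ever noised."""
    noisy = input_ids.clone()
    mask = torch.rand_like(input_ids, dtype=torch.float) < mask_prob
    mask[:, :prompt_len] = False              # prompt is never masked
    noisy[mask] = MASK_ID
    return noisy, mask

def full_sequence_masking(input_ids, mask_prob):
    """Extended SFT: prompt tokens can also be masked, so the model learns to
    infill prompts as well as responses."""
    noisy = input_ids.clone()
    mask = torch.rand_like(input_ids, dtype=torch.float) < mask_prob
    noisy[mask] = MASK_ID
    return noisy, mask

ids = torch.randint(5, 100, (2, 12))
noisy, mask = full_sequence_masking(ids, mask_prob=0.3)
```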

AI · Bearish · arXiv – CS AI · Apr 6 · 7/10

Understanding the Effects of Safety Unalignment on Large Language Models

Research shows that two methods for removing safety guardrails from large language models, jailbreak-tuning and weight orthogonalization, have significantly different impacts on model capabilities. Weight orthogonalization produces models that are far more capable of assisting with malicious activity while also retaining more of their general performance, though supervised fine-tuning can help mitigate these risks.

AI · Bullish · arXiv – CS AI · Apr 6 · 7/10

Improving Role Consistency in Multi-Agent Collaboration via Quantitative Role Clarity

Researchers developed a quantitative method to improve role consistency in multi-agent AI systems by introducing a role clarity matrix that measures alignment between agents' assigned roles and their actual behavior. The approach significantly reduced role overstepping rates from 46.4% to 8.4% in Qwen models and from 43.4% to 0.2% in Llama models during ChatDev system experiments.

🧠 Llama
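
A toy version of the bookkeeping behind such a metric, assuming each agent action can be classified to the role it actually belongs to; the paper's matrix definition and classifier may differ.

```python
# Role-clarity matrix: entry [i, j] = fraction of role i's actions that look
# like role j's job. Off-diagonal mass corresponds to role overstepping.
import numpy as np

roles = ["CEO", "Programmer", "Reviewer"]

def role_clarity_matrix(action_log):
    """`action_log`: list of (assigned_role, role_the_action_actually_belongs_to)."""
    m = np.zeros((len(roles), len(roles)))
    for assigned, observed in action_log:
        m[roles.index(assigned), roles.index(observed)] += 1
    return m / m.sum(axis=1, keepdims=True)

log = [("CEO", "CEO"), ("CEO", "Programmer"),            # CEO oversteps once
       ("Programmer", "Programmer"), ("Programmer", "Programmer"),
       ("Reviewer", "Reviewer")]
m = role_clarity_matrix(log)
overstep_rate = 1.0 - np.diag(m).mean()                  # off-diagonal mass
print(m, overstep_rate)
```
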
AI · Neutral · arXiv – CS AI · Apr 6 · 7/10

One Model to Translate Them All? A Journey to Mount Doom for Multilingual Model Merging

Researchers studied weight-space model merging for multilingual machine translation and found it significantly degrades performance when target languages differ. Analysis reveals that fine-tuning redistributes rather than sharpens language selectivity in neural networks, increasing representational divergence in higher layers that govern text generation.

AI · Neutral · arXiv – CS AI · Mar 26 · 7/10

Mitigating Many-Shot Jailbreaking

Researchers have developed techniques to mitigate many-shot jailbreaking (MSJ) attacks on large language models, where attackers use numerous examples to override safety training. Combined fine-tuning and input sanitization approaches significantly reduce MSJ effectiveness while maintaining normal model performance.

AI · Bullish · arXiv – CS AI · Mar 17 · 7/10

Boosting Large Language Models with Mask Fine-Tuning

Researchers introduce Mask Fine-Tuning (MFT), a novel approach that improves large language model performance by applying binary masks to optimized models without updating weights. The method achieves consistent performance gains across different domains and model architectures, with average improvements of 2.70/4.15 in IFEval benchmarks for LLaMA models.
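
One common way to realize "masks over frozen weights" is a straight-through estimator over per-weight logits, sketched below; this is an assumption about the mechanism, not MFT's exact parameterization.

```python
# Learn a binary mask over a frozen linear layer; only the mask logits train.
import torch
import torch.nn as nn

class MaskedLinear(nn.Module):
    def __init__(self, linear: nn.Linear):
        super().__init__()
        self.weight = linear.weight.detach()            # frozen pretrained weight
        self.bias = linear.bias.detach() if linear.bias is not None else None
        # Start near "keep everything" so the masked model matches the original.
        self.mask_logits = nn.Parameter(torch.full_like(self.weight, 2.0))

    def forward(self, x):
        soft = torch.sigmoid(self.mask_logits)
        hard = (soft > 0.5).float()
        mask = hard + soft - soft.detach()              # straight-through estimator
        return nn.functional.linear(x, self.weight * mask, self.bias)

layer = MaskedLinear(nn.Linear(16, 8))
out = layer(torch.randn(4, 16))
out.sum().backward()                                    # gradients reach mask_logits only
```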

AI · Bearish · arXiv – CS AI · Mar 17 · 7/10

Narrow Fine-Tuning Erodes Safety Alignment in Vision-Language Agents

Research reveals that fine-tuning aligned vision-language AI models on narrow harmful datasets causes severe safety degradation that generalizes across unrelated tasks. The study shows multimodal models exhibit 70% higher misalignment than text-only evaluation suggests, with even 10% harmful training data causing substantial alignment loss.

AI · Bullish · arXiv – CS AI · Mar 16 · 7/10

Learnability and Privacy Vulnerability are Entangled in a Few Critical Weights

Researchers discovered that privacy vulnerabilities in neural networks exist in only a small fraction of weights, but these same weights are critical for model performance. They developed a new approach that preserves privacy by rewinding and fine-tuning only these critical weights instead of retraining entire networks, maintaining utility while defending against membership inference attacks.
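
In spirit, the defense looks something like the sketch below: score weights on privacy-sensitive examples, rewind only the top slice to their pre-fine-tuning values, then fine-tune just that slice on clean data. The criticality score and procedure here are assumptions, not the paper's exact recipe.

```python
# Rewind only the most privacy-critical weights instead of retraining the model.
import torch

def rewind_critical_weights(model, initial_params, loss_fn, x_priv, y_priv, frac=0.01):
    """Score each weight by gradient magnitude on privacy-sensitive examples and
    reset the top `frac` of them to their pre-fine-tuning values."""
    model.zero_grad()
    loss_fn(model(x_priv), y_priv).backward()
    scores = torch.cat([p.grad.abs().flatten() for p in model.parameters()])
    thresh = torch.quantile(scores, 1.0 - frac)
    with torch.no_grad():
        for p, p0 in zip(model.parameters(), initial_params):
            critical = p.grad.abs() >= thresh
            p[critical] = p0[critical]          # rewind only the critical entries
    # A short follow-up fine-tuning pass on non-sensitive data would then update
    # only the rewound entries (e.g. by masking gradients).
```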

AI · Neutral · arXiv – CS AI · Mar 16 · 7/10

Aligning Large Language Model Agents with Rational and Moral Preferences: A Supervised Fine-Tuning Approach

Researchers developed a supervised fine-tuning approach to align large language model agents with specific economic preferences, addressing systematic deviations from rational behavior in strategic environments. The study demonstrates how LLM agents can be trained to follow either self-interested or morally-guided strategies, producing distinct outcomes in economic games and pricing scenarios.

AI · Bullish · arXiv – CS AI · Mar 11 · 7/10

Small Language Models for Efficient Agentic Tool Calling: Outperforming Large Models with Targeted Fine-tuning

Researchers demonstrated that a fine-tuned small language model (SLM) with 350M parameters can significantly outperform large language models like ChatGPT in tool-calling tasks, achieving a 77.55% pass rate versus ChatGPT's 26%. The result suggests organizations can reduce operational costs while maintaining or improving performance by fine-tuning smaller, task-targeted models.

๐Ÿข Meta๐Ÿข Hugging Face๐Ÿง  ChatGPT
AI · Neutral · arXiv – CS AI · Mar 11 · 7/10

An Empirical Study and Theoretical Explanation on Task-Level Model-Merging Collapse

Researchers have identified a phenomenon called 'merging collapse' where combining independently fine-tuned large language models leads to catastrophic performance degradation. The study reveals that representational incompatibility between tasks, rather than parameter conflicts, is the primary cause of merging failures.

AI · Bullish · arXiv – CS AI · Mar 11 · 7/10

Efficiently Aligning Draft Models via Parameter- and Data-Efficient Adaptation

Researchers introduce Efficient Draft Adaptation (EDA), a framework that significantly reduces the cost of adapting draft models for speculative decoding when target LLMs are fine-tuned. EDA achieves superior performance through decoupled architecture, data regeneration, and smart sample selection while requiring substantially less training resources than full retraining.

AI · Neutral · arXiv – CS AI · Mar 11 · 7/10

A Consensus-Driven Multi-LLM Pipeline for Missing-Person Investigations

Researchers have developed Guardian, an AI system using multiple large language models (LLMs) to assist in missing-person investigations during the critical first 72 hours. The system employs a consensus-driven pipeline that coordinates specialized LLM models for information extraction and processing, with fine-tuning using QLoRA methodology.
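
The consensus step can be pictured as simple field-level voting across independently prompted or fine-tuned extractors; the function and field handling below are illustrative, not Guardian's actual pipeline.

```python
# Keep an extracted field only when enough independent LLM extractors agree on it.
from collections import Counter

def consensus_extract(report_text, extractors, min_votes=2):
    """Each extractor returns {field: value}; a value is accepted only when at
    least `min_votes` extractors produce it."""
    votes = {}
    for extract in extractors:                  # e.g. differently fine-tuned LLMs
        for field, value in extract(report_text).items():
            votes.setdefault(field, Counter())[value] += 1
    return {field: counter.most_common(1)[0][0]
            for field, counter in votes.items()
            if counter.most_common(1)[0][1] >= min_votes}
```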

AI · Bearish · arXiv – CS AI · Mar 9 · 7/10

Window-based Membership Inference Attacks Against Fine-tuned Large Language Models

Researchers developed WBC (Window-Based Comparison), a new membership inference attack method that significantly outperforms existing approaches by analyzing localized patterns in Large Language Models rather than global signals. The technique achieves 2-3 times better detection rates and exposes critical privacy vulnerabilities in fine-tuned LLMs through sliding window analysis and binary voting mechanisms.
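
A minimal sketch of the windowed voting idea, assuming per-token losses have already been computed under the fine-tuned target model and a reference model; WBC's window size, statistic, and calibration differ.

```python
# Window-based membership score: fraction of local windows where the fine-tuned
# model fits the span better than the reference model.
import torch

def wbc_score(target_token_losses, reference_token_losses, window=16):
    """Binary vote per window; the membership score is the share of yes-votes."""
    votes = []
    for start in range(0, len(target_token_losses) - window + 1, window):
        t = target_token_losses[start:start + window].mean()
        r = reference_token_losses[start:start + window].mean()
        votes.append((t < r).float())           # vote: target fits this span better
    return torch.stack(votes).mean() if votes else torch.tensor(0.0)
```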

AI · Bullish · arXiv – CS AI · Mar 9 · 7/10

FLoRG: Federated Fine-tuning with Low-rank Gram Matrices and Procrustes Alignment

Researchers propose FLoRG, a new federated learning framework for efficiently fine-tuning large language models that reduces communication overhead by up to 2041x while improving accuracy. The method uses Gram matrix aggregation and Procrustes alignment to solve aggregation errors and decomposition drift issues in distributed AI training.
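
To illustrate the alignment step only (Gram-matrix aggregation is omitted, and the factor shapes are invented), here is a sketch of orthogonal Procrustes alignment of client low-rank factors before averaging.

```python
# Align each client's low-rank factor to a reference before averaging, using the
# closed-form orthogonal Procrustes solution R = U V^T from the SVD of X^T Y.
import numpy as np

def procrustes_align(client_factor, reference_factor):
    """Rotate `client_factor` so it best matches the reference in Frobenius norm."""
    u, _, vt = np.linalg.svd(client_factor.T @ reference_factor)
    return client_factor @ (u @ vt)

def aggregate_lora_factors(client_factors):
    """Align every client's factor to client 0, then average."""
    ref = client_factors[0]
    aligned = [procrustes_align(f, ref) for f in client_factors]
    return np.mean(aligned, axis=0)

# Toy usage: three clients with rank-4 factors of a 64-dim layer.
rng = np.random.default_rng(0)
clients = [rng.standard_normal((64, 4)) for _ in range(3)]
merged = aggregate_lora_factors(clients)
```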

Page 1 of 6